SAHARA: A Revolutionary Service Architecture for Future Telecommunications Systems
Randy H. Katz, Anthony JosephComputer Science Division
Electrical Engineering and Computer Science DepartmentUniversity of California, Berkeley
Berkeley, CA 94720-1776
Project Goals
• Delivery of end-to-end services with desirable properties (e.g., performance, reliability, “qualities”), provided by multiple, potentially distrusting service providers
• Architectural framework for:
– Economics-based resource allocation
– Third-party mediators, such as Clearinghouses
– Dynamic formation of service confederations
– Support for diverse business models
Presentation Outline
• Motivation
• Project SAHARA
• Initial Investigations
• Testbeds
• Summary and Conclusions
The Huge Expense of New Telecomms Infrastructures
• European auctions for 3G spectrum: 50 billion ECU and counting
• Capital outlays likely to match spectrum expenses, all before the first ECU of revenue!
• Compelling motivation for collaborative deployment of wireless infrastructure
Any Way to Build a Network?
• Partitioning of frequencies independent of actual subscriber density
– Successful operators oversubscribe resources, while less popular providers retain excess capacity
– Different flavor of roaming: among collocated/competing service providers
• Duplicate antenna sites
– Serious problem given community resistance
• Redundant backhaul networks
– Limited economies of scale
The Case for Horizontal Architectures
“The new rules for success will be to provide one part of the puzzle and to cooperate with other suppliers to create the complete solutions that customers require. ... [V]ertical integration breaks down when innovation speeds up. The big telecoms firms that will win back investor confidence soonest will be those with the courage to rip apart their monolithic structure along functional layers, to swap size for speed and to embrace rather than fear disruptive technologies.”
The Economist Magazine, 16 December 2000
Horizontal Internet Service Business Model

[Figure: layered business model, from applications down to connectivity]
• Applications (Portals, E-Commerce, E-Tainment, Media)
• Application-specific Servers (Streaming Media, Transformation): ASP
• Application Infrastructure Services (Distribution, Caching, Searching, Hosting): AIP, ISV
• Application-specific Overlay Networks (Multicast Tunnels, Management Services)
• Internet Data Centers
• Global Packet Network Internetworking (Connectivity): ISP, CLEC
Feasible Alternative: Horizontal Competition vs. Vertical Integration
• Service Operators “own” the customer, provide “brand”, issue/collect the bills
• Independent Backhaul Operators
• Independent Antenna Site Operators
• Independent Owners of the Spectrum
• Microscale auctions/leases of network resources
• Emerging concept of Virtual Operators
Virtual Operator
• Local premise owner deploys own access infrastructure
– Better coverage/more rapid build-out of network
– Deployments in airports, hotels, conference centers, office buildings, campuses, …
• Overlay service provider (e.g., PBMS) vs. organizational service provider (e.g., UCB IS&T)
– Single bill/settle with service participants
• Support for confederated/virtual devices
– Mini-BS for cellular/data + WLAN for high-rate data
Presentation Outline
• Motivation
• Project SAHARA
• Initial Investigations
• Testbeds
• Summary and Conclusions
The “Sahara” Project
• Service
• Architecture for
• Heterogeneous
• Access,
• Resources, and
• Applications
SAHARA Assumptions
• Dynamic confederations to better share resources & deploy access/achieve regional coverage more rapidly
• Scarce resources efficiently allocated using dynamic “market-driven” mechanisms
• Trusted third parties manage the resource marketplace on a fair, unbiased, audited, and verifiable basis
• Vertical stovepipes replaced by horizontally organized “multi-providers,” open to increased competition and more efficient allocation of resources
Architectural Elements
• “Open” service/resource allocation model
– Independent service creation, establishment, placement in overlapping domains
– Resources, capabilities, status described/exchanged among confederates via enhanced capability negotiation
– Allocation based on economic methods, such as congestion pricing, dynamic marketplaces/auctions
– Trust management among participants, based on trusted third-party monitors
Architectural Elements
• Forming dynamic confederations
– Discovering potential confederates
– Establishing trust relationships
– Managing transitive trust relationships & levels of transparency
– Not all confederates need be competitors: heterogeneous, collocated access networks to better support applications
Architectural Elements
• Alternative view: service brokering
– Dynamically construct overlays on component services provided by underlying service providers
• E.g., overlay network segments with desirable performance attributes
• E.g., construct end-to-end multicast trees from subtrees in different service provider clouds
– Redirect to alternative service instances
• E.g., choose instance based on distance, network load, server load, trust relationships, resilience to network failure, …
Deliverables
• Architecture and mechanisms for:
– Fine-grained, market-driven resource allocation
– Application awareness in decision making
• Confederations and trust management
– Dynamic marshalling, observation/verification of participant behaviors, dissolution of confederations
– Mechanisms to “audit” third-party resource allocations, ensuring fairness and freedom from bias in operation
• New handoff concepts based on redirection
– Not just network handoff for lower-cost access
– Also alternative service providers to balance loads
Research Methodology
• Evaluate the existing system to discover bottlenecks
• Analyze alternatives to select among approaches
• Prototype selected alternatives to understand implementation complexities
• Repeat

[Figure: Analyze & Design → Prototype → Evaluate cycle]
Presentation Outline
• Motivation
• Project SAHARA
• Initial Investigations
• Testbeds
• Summary and Conclusions
Initial Investigations
• Congestion-Based Pricing
– Economics-based resource allocation
• Clearinghouse Architecture
– Trusted resource mediators
– Measurement-based admission control with traffic policing
• Service Composition
– Achieving performance, reliability from multiple placed service instances
Congestion-Based Pricing
• Hypothesis: dynamic pricing influences user behavior
– E.g., shorten/defer call sessions; accept lower audio/video QoS
• When a critical resource reaches congestion levels, modify prices to drive utilization back to “acceptable” levels
– E.g., available bandwidth, time slots, number of simultaneous sessions
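The price-adjustment idea above can be sketched as a simple feedback loop. This is an illustrative sketch, not the talk’s actual algorithm: all constants (target utilization, raise and decay factors) are assumptions chosen only to show the mechanism.

```python
# Illustrative congestion-based pricing loop: when utilization of a
# critical resource (here, PSTN access lines) exceeds a target, raise
# the per-minute price multiplicatively; otherwise decay it back toward
# the base rate. All constants are assumptions, not from the talk.

BASE_RATE = 1.0      # tokens/minute when uncongested
TARGET_UTIL = 0.8    # "acceptable" utilization level
RAISE_FACTOR = 1.25  # price multiplier while congested
DECAY_FACTOR = 0.9   # relaxation toward the base rate

def update_price(price, used_lines, total_lines):
    """One pricing interval: adjust the price from current utilization."""
    utilization = used_lines / total_lines
    if utilization > TARGET_UTIL:
        return price * RAISE_FACTOR               # congested: discourage use
    return max(BASE_RATE, price * DECAY_FACTOR)   # relax toward base

price = BASE_RATE
for used in [5, 9, 10, 9, 6, 4]:   # simulated load on 10 PSTN lines
    price = update_price(price, used, 10)
```

The multiplicative raise/decay shape is one common choice for congestion feedback; the study itself only states that prices are modified when congestion is reached.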
Computer Telephony Services (CTS) Testbed
• E.g., Dialpad.com & Net-to-Phone
• Gateways as bottlenecks (limited PSTN access lines)
• Use congestion pricing (CP) to entice users to:
– Talk shorter
– Talk later
– Accept lower quality

[Figure: Internet-to-PSTN gateways connecting the Internet and the PSTN]
Berkeley User Study
• Goal: determine effectiveness of CP
• Figures of merit
– Maximize utilization (service not idling)
– Reduce provisioning
– Reduce congestion (reduced blocking probability)
• User acceptance/reactions to CP
– Talk shorter
– Wait
– Defer talk to another time
– Use an alternative access device
– Use reduced connection qualities
Experiments
• Vary price, quality, interval of price changes
• Pricing policies
– Congestion pricing: rate depends on current load
– Flat-rate pricing: same rate all the time
– Time-of-day pricing: higher rate during peak hours
– Call-duration pricing: higher rate for long-duration calls
– Access-device pricing: higher rate for using a phone instead of a computer
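The five policies can be summarized as rate functions. In this hypothetical sketch, the concrete rates, thresholds, and peak hours are invented for illustration; the slides do not give the study’s actual tariffs.

```python
# Hypothetical rate functions (tokens/minute) for the five pricing
# policies in the user study. All numeric values are assumptions.

def rate(policy, *, load=0.0, hour=12, duration_min=0, device="computer"):
    if policy == "flat":
        return 1.0                               # same rate all the time
    if policy == "congestion":
        return 1.0 + 2.0 * load                  # rate grows with current load
    if policy == "time_of_day":
        return 2.0 if 19 <= hour <= 23 else 1.0  # higher during 7-11pm peak
    if policy == "call_duration":
        return 2.0 if duration_min > 10 else 1.0 # higher for long calls
    if policy == "access_device":
        return 2.0 if device == "phone" else 1.0 # phone costs more
    raise ValueError(policy)
```

For example, `rate("congestion", load=0.5)` charges more than `rate("flat")`, which is exactly the incentive the study measures.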
Experimental Setup & Limitations
• Computers vs. phones to make/receive free phone calls
• Different pricing policies; budget of 1000 tokens/week
• Real-time pricing, connection quality & accounting information
Flat Rate versus Time-of-Day

[Figure: two histograms of calling pattern in minutes vs. time of day (hours 0-23). Under flat-rate pricing, usage peaks during the 7-11pm hours. Under time-of-day pricing, the peak shifted: high bursts right before & right after the peak hours.]
Initial Results
• Call-duration pricing
– Hypothesis: fewer long-duration calls & more short-duration calls
– Result: fewer long-duration calls, but no increase in short-duration calls
• Congestion pricing
– Congestion: two or more simultaneous users
– Hypothesis: talk less when encountering CP
– Result: each user used the service for 8.44 minutes (standard error 11.3) more; observed reduction in call session when CP was encountered: 2.31 minutes (2.68) less
– Not statistically significant (t-test)
– Not enough users to cause much congestion
Preliminary Findings
• Feasible to implement/use CP in a real system
• Pricing better utilizes existing resources, reduces congestion
• CP is better than other pricing policies
• Based on surveys, users prefer CP to flat-rate pricing if its average rate is lower
– Service providers can better utilize existing resources by providing users with incentives to use CP
• Limitations
– Too few users
– Only applies to telecommunication services
Clearinghouse
• Vision: data, multimedia (video, voice, etc.) and mobile applications over one IP network

[Figure: an IP-based core connecting video conferencing/distance learning, web surfing/email/TCP connections, VoIP (e.g., NetMeeting) via an H.323 gateway to the PSTN, and GSM wireless phones]

• Question: how to regulate resource allocation within and across multiple domains in a scalable manner to achieve end-to-end QoS?
Clearinghouse Goals
• Design/build a distributed control architecture for scalable resource provisioning
– Predictive reservations across multiple domains
– Admission control & traffic policing at the edge
• Demonstrate the architecture’s properties and performance
– Achieve adequate performance w/o per-flow state at the edge
– Robust against traffic fluctuations and misbehaving flows
• Prototype proposed mechanisms
– Minimize edge-router overhead for scalability/ease of deployment
Clearinghouse Architecture
• Clearinghouse is a distributed architecture: each CH-node serves as a resource manager
• Functionalities
– Monitors network performance on ingress & egress links
– Estimates traffic demand distributions
– Adapts trunk/aggregate reservations within & across domains based on traffic statistics
– Performs admission control based on the estimated traffic matrix
– Coordinates traffic policing at ingress & egress points for detecting misbehaving flows
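The “adapt trunk reservations based on traffic statistics” function can be sketched as follows. The percentile-style rule (mean plus a few standard deviations of recent demand) and the window size are assumptions for illustration; the slides do not specify the CH’s actual estimator.

```python
# Sketch of a CH-node adapting a trunk reservation to measured demand:
# keep a sliding window of demand samples and renew the reservation at
# mean + headroom * std of the estimated demand distribution. The rule
# and parameters are assumptions, not the talk's stated algorithm.

from collections import deque
import math

class TrunkReservation:
    def __init__(self, window=100, headroom=2.0):
        self.samples = deque(maxlen=window)  # recent demand measurements (Mb/s)
        self.headroom = headroom             # std-devs of slack above the mean

    def observe(self, demand):
        self.samples.append(demand)

    def reservation(self):
        """Reserve mean + headroom * std of recent demand."""
        n = len(self.samples)
        mean = sum(self.samples) / n
        var = sum((x - mean) ** 2 for x in self.samples) / n
        return mean + self.headroom * math.sqrt(var)

trunk = TrunkReservation()
for d in [10, 12, 11, 13, 12]:   # simured per-interval demand samples
    trunk.observe(d)
```

Reserving a high percentile of the demand distribution, rather than the peak, is what lets the CH track fluctuations without gross overprovisioning.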
Multiple-ISP Scenario

[Figure: hosts attached to ISPs 1..n and 2..m, with ingress routers (IR) and egress routers (ER) at the domain boundaries]

• Hybrid of flat and hierarchical structures
– Local hierarchy within large ISPs
• Distributes network state to various CH-nodes and reduces the amount of state information maintained
– Flat structure for peer-to-peer relationships across independent ISPs
Illustration

[Figure: a host connects through an edge router into ISP1; CH0 nodes manage logical domains LD0, which roll up to CH1 at LD1]

• A hierarchy of logical domains (LDs)
– E.g., an LD0 can be a POP or a group of neighboring POPs
• A CH-node is associated with each LD
– Maintains resource allocations between ingress-egress pairs
– Estimates traffic demand distributions & updates parent CH-nodes
Illustration

[Figure: same CH hierarchy as above]

• Parent CH-node
– Adapts trunk reservations across LDs for aggregate traffic within the ISP
Peer-Peer

[Figure: top-level CH1 nodes of ISP n and ISP m coordinate directly on behalf of their hosts]

• Appears flat at the top level
– Coordinates peer-to-peer trunk reservations across multiple ISPs
Key Design Decisions
• Service model: ingress/egress routers as endpoints
– IE-Pipe(s,d) = aggregate traffic entering an ISP domain at IR-s and exiting at ER-d
• Reservations set up for aggregated flows on intra- and inter-domain links
– Adapt dynamically to track traffic fluctuations
– Core routers stateless; edge routers maintain aggregate state
• Traffic monitoring, admission control, traffic policing for individual flows performed at the edge
– Access routers have smaller routing tables and experience lower traffic aggregation relative to backbone routers
– Most congestion (packet loss/delay) happens at the edges
Traffic-Matrix Admission Control
• Modifications to edge routers
– Traffic monitors passively measure the aggregate rate of existing flows, M(s,d)
– IR-s forwards control messages (Request/Accept/Reject) between the CH and the host/proxy
– Estimate traffic demand distributions, D(s,:), and report to the CH
• CH
– Leverages knowledge of topology and the traffic matrix to make admission decisions

[Figure: host network A at POP 1 requests rate Rnew through IR-s (with traffic monitor); the CH accepts or rejects; admitted traffic exits at ER-d toward host network B at POP 2]
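The admission decision itself reduces to a bandwidth check per ingress-egress pipe. This sketch, with invented data structures, shows the shape of the check: the CH admits a new flow on pipe (s, d) only if the passively measured aggregate rate M(s, d) plus the requested rate still fits under the trunk reservation for that pair.

```python
# Sketch of traffic-matrix admission control: edge monitors report
# M(s, d); the CH accepts a Request(Rnew) iff the pipe's reservation
# can absorb it. The class and its fields are illustrative assumptions.

class Clearinghouse:
    def __init__(self):
        self.reserved = {}   # (s, d) -> reserved trunk bandwidth (Mb/s)
        self.measured = {}   # (s, d) -> measured aggregate rate M(s, d)

    def report(self, s, d, rate):
        """Edge traffic monitor passively reports M(s, d)."""
        self.measured[(s, d)] = rate

    def admit(self, s, d, r_new):
        """Accept iff measured load plus the new flow fits the reservation."""
        used = self.measured.get((s, d), 0.0)
        return used + r_new <= self.reserved.get((s, d), 0.0)

ch = Clearinghouse()
ch.reserved[("IR-s", "ER-d")] = 10.0   # trunk reservation on the pipe
ch.report("IR-s", "ER-d", 7.5)         # monitor's measured aggregate
ch.admit("IR-s", "ER-d", 2.0)          # 9.5 <= 10.0: Accept
ch.admit("IR-s", "ER-d", 3.0)          # 10.5 > 10.0: Reject
```

Because the check uses only the aggregate M(s, d) and the trunk reservation, the CH needs no per-flow state for existing traffic, matching the design decision above.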
Group Policing for Malicious Flow Detection
• CH assigns a flow id (Fid) if the flow is admitted
– Let FidIn = x, FidEg = y
• Traffic policer at IR-s aggregates flows based on FidIn for group policing (TBF for group x)
• Traffic policer at ER-d aggregates flows based on FidEg for group policing (TBF for group y)
• A traffic policer at an IR or ER only maintains the total allocated bandwidth for the group (aggregate state), not per-flow reservation status

[Figure: host network A at POP 1 sends a Request; the CH accepts it (assigning a Fid) and updates token-bucket filters (TBFs) at IR-s and ER-d for groups x and y; traffic exits toward host network B at POP 2]
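A minimal token-bucket filter (TBF) for the group policing above might look like the following sketch: one bucket per group id, sized to the group’s total allocated bandwidth, with no per-flow state. The rate and burst values are assumptions for illustration.

```python
# Minimal token-bucket filter for group policing: one bucket per group
# (FidIn at the IR, FidEg at the ER); all flows in the group share it,
# so the policer keeps only aggregate state. Parameters are assumptions.

class TokenBucket:
    def __init__(self, rate, burst):
        self.rate = rate        # tokens (bytes) added per second
        self.burst = burst      # bucket depth (maximum burst in bytes)
        self.tokens = burst     # start with a full bucket
        self.last = 0.0         # time of the previous packet

    def conforms(self, now, nbytes):
        """Refill by elapsed time, then admit the packet iff tokens remain."""
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False            # group exceeds its allocation: drop or mark

# One bucket per group id at the ingress policer; flows share it.
policers = {"group-x": TokenBucket(rate=1000.0, burst=1500.0)}
ok = policers["group-x"].conforms(now=0.0, nbytes=1500)   # consumes full burst
late = policers["group-x"].conforms(now=0.1, nbytes=500)  # only 100 refilled
```

A misbehaving flow shows up as its whole group exceeding the bucket; the CH can then drill down to find the offender, which is why per-flow state is not needed at the routers.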
Service Composition
• Assumptions
– Providers deploy services throughout the network
– Portals constructed via service composition
• Quickly enable new functionality on new devices
• Possibly through SLAs
– Code is initially non-mobile
• Service placement managed: fixed locations, evolves slowly
– New services created via composition
• Across service providers in the wide area: a service-level path
Service Composition

[Figure: composition examples. A cellular phone (Provider R) reaches an email repository (Provider A) through a text-to-speech service (Provider Q); a thin client (Provider B) reaches a video-on-demand server (Provider A) through replicated transcoder instances (Provider B)]
Architecture for Service Composition and Management

[Figure: three planes. Hardware platform: service clusters. Logical platform: peering relations, overlay network; service location; network performance. Application plane: composed services; service-level path creation; handling failures (detection, recovery)]
Architecture

[Figure: source and destination connected through service clusters (compute clusters capable of running services) overlaid on the Internet; peering between clusters provides monitoring & cascading; planes as in the previous slide]

• Overlay nodes are clusters
– Compute platform
– Hierarchical monitoring
• Overlay network provides context for service-level path creation & failure handling
Service-Level Path Creation
• Connection-oriented network
– Explicit session setup plus state at intermediate nodes
– Connection-less protocol for connection setup
• Three levels of information exchange
– Network path liveness
• Low overhead, but very frequent
– Performance metrics: latency/bandwidth
• Higher overhead, not so frequent
• Bandwidth changes only once in several minutes
• Latency changes appreciably only once an hour
– Information about service location in clusters
• Bulky, but does not change very often
• Also use an independent service location mechanism
Service-Level Path Creation
• Link-state algorithm for information exchange
– Reduced measurement overhead: finer time scales
– Service-level path created at the entry node
– Allows all-pairs-shortest-path calculation on the graph
– Path caching
• Remember what previous clients used
• Another use of clusters
– Dynamic path optimization
• Since session transfer is a first-order feature
• The first path created need not be optimal
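The entry-node computation above can be sketched as a shortest-path search over the overlay graph, followed by caching. This uses Dijkstra (the slides only say a shortest-path calculation over link-state information); the overlay graph and latency weights are invented for illustration.

```python
# Sketch of service-level path creation at the entry node: link-state
# metrics give an overlay graph; the entry cluster runs Dijkstra over
# overlay links and caches the resulting path for later clients.

import heapq

def shortest_path(graph, src, dst):
    """Dijkstra over overlay links; graph[u] = [(v, latency_ms), ...]."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [dst], dst               # walk predecessors back to src
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

overlay = {                               # hypothetical overlay, latency in ms
    "entry": [("A", 10), ("B", 25)],
    "A": [("exit", 30)],
    "B": [("exit", 5)],
}
path_cache = {}                           # path caching: reuse for later clients
path_cache[("entry", "exit")] = shortest_path(overlay, "entry", "exit")
```

Because session transfer is a first-order feature, a cached or first-computed path can later be swapped for a better one, which is why the first path created need not be optimal.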
Session Recovery: Design Tradeoffs
• End-to-end:
– Pre-establishment possible
– But failure information has to propagate
– Performance of the alternate path could have changed
• Local-link:
– No need for information to propagate
– But additional overhead

[Figure: overlay network functions, as before: handling failures (detection, recovery), service-level path creation, service location, network performance, finding entry/exit]
The Overlay Topology: Design Factors
• How many nodes?
– A large number of nodes implies reduced latency overhead
– But scaling concerns
• Where to place nodes?
– Close to the edges, so that hosts have nearby points of entry and exit
– Close to the backbone, to take advantage of good connectivity
• Who to peer with?
– Nature of connectivity
– Least sharing of physical links among overlay links
Presentation Outline
• Motivation
• Project SAHARA
• Initial Investigations
• Testbeds
• Summary and Conclusions
Testbeds at Different Scale
• Room-scale
– Bluetooth devices working as ensembles, cooperatively sharing bandwidth within a microcell
– Inherent trust, but finer-grained intelligent and active allocation as opposed to etiquette rules
– How lightweight? Too heavyweight for Bluetooth?
• Building-scale
– Multiple wireless LAN “operators” in a building
– Experiment with “evil operators”; third-party audit mechanisms to determine the offender
– GoN offers alternative telephony, dynamic allocation of frequencies/time slots to competing/confederating providers
Testbeds at Different Scale
• Campus-scale
– Departmental WLAN service providers with overlapping coverage outdoors
• Regional-scale
– Possible collaborations with AT&T Wireless (NTT DoCoMo), PBMS, Sprint?
Presentation Outline
• Motivation
• Project SAHARA
• Initial Investigations
• Testbeds
• Summary and Conclusions
Summary
• Congestion Pricing, Clearinghouse, and Service Composition are first attempts at service architecture components
• Next steps
– Generalization to multiple service providers
– Introduction of market-based mechanisms: congestion pricing, auctions
– Composition across confederated service providers
– Trust management infrastructure
– Understand peer-to-peer confederation formation vs. hierarchical overlay brokering
Conclusions
• Support for multiple service providers needed to be retrofitted onto the original Internet architecture
• The telephony architecture is a better-developed model of multiple service providers & peering, but with longer-lived agreements and fewer providers
• Support is needed in a more dynamic environment, with larger numbers of service providers and/or service instances