
Page 1: TCP Performance

TCP Performance

Phil Cayton, CSE 581, 01/12/02

Page 2: TCP Performance

Papers read for this discussion

TCP Behavior of a Busy Internet Server: Analysis and Improvements (H. Balakrishnan, V. Padmanabhan, S. Seshan, M. Stemm, R. Katz)

An Integrated Congestion Management Architecture for Internet Hosts (H. Balakrishnan, H. Rahul, S. Seshan)

Endpoint Admission Control: Architectural Issues and Performance (L. Breslau, E. Knightly, S. Shenker, I. Stoica, H. Zhang)

TCP Congestion Control with a Misbehaving Receiver (S. Savage, N. Cardwell, D. Wetherall, T. Anderson)

Page 3: TCP Performance

Agenda

Overview of loss recovery and congestion control behavior

Issues with TCP, current trends, and techniques to solve them

Attacks against and defenses for TCP vulnerabilities

Integrated congestion management

Endpoint admission control

Summary

Discussion

Page 4: TCP Performance

Changing Nature of Net Traffic

Historically: single, long-running transfers (Telnet, FTP, etc.)

Currently: multiple, concurrent, short-lived transfers and use of transports that do not adapt to congestion

Short transfers do not give TCP time to adapt to network congestion

Receiver modifications exacerbate the problem by circumventing congestion control mechanisms; they can hog bandwidth or mount DoS attacks

PROBLEM: current trends could affect the long-term stability of the Internet

Page 5: TCP Performance

AIMD

Slow Start: start the congestion window (CW) at one segment and send it. For each segment acked before the timeout, add one MSS to the CW, so the window roughly doubles every RTT.

Congestion Avoidance: once the CW reaches the slow-start threshold, add only one segment to the CW per estimated RTT.

Multiplicative Decrease: reduce the CW by half for each retransmit.
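A minimal sketch of how these three rules update the congestion window; it is illustrative only, not the actual TCP implementation, and names such as cwnd and ssthresh are the conventional ones rather than anything from the slides:

    # Illustrative AIMD state for one TCP sender; all units are segments.
    MSS = 1

    class AimdWindow:
        def __init__(self, ssthresh=64):
            self.cwnd = 1 * MSS          # slow start begins at one segment
            self.ssthresh = ssthresh     # slow-start threshold

        def on_ack(self):
            if self.cwnd < self.ssthresh:
                # Slow start: +1 segment per ack, i.e. the window doubles each RTT.
                self.cwnd += MSS
            else:
                # Congestion avoidance: +1 segment per RTT, i.e. +MSS/cwnd per ack.
                self.cwnd += MSS / self.cwnd

        def on_loss(self):
            # Multiplicative decrease: halve the threshold and restart slow start
            # (the classic reaction to a retransmit timeout).
            self.ssthresh = max(self.cwnd / 2, 2 * MSS)
            self.cwnd = 1 * MSS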

Page 6: TCP Performance

Problems with Current TCP Congestion Control

Loss recovery techniques are not effective in dealing with packet losses

Default socket buffer size is too small, so the receiver window becomes a bottleneck

Parallel connections are less responsive to congestion-induced losses

Ack compression can artificially increase the perceived queue length

Page 7: TCP Performance

Solving Problems and Improving TCP Performance

Integrated congestion control and loss recovery

Apps can use multiple TCP connections

A single integrated CW is shared by the set of TCP connections

Eliminates slow start for new connections

Decreases the overall effect of congestion, as all connections are informed of and react to the congestion

Data-driven loss recovery integrated across the set of TCP connections

Connections know if "lost" segments arrive on other connections and do not need to time out (a sketch of the shared window follows below)
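A small sketch of the shared-window idea; the class and method names are illustrative assumptions, not the interface from the paper, and slow start is omitted for brevity:

    # Illustrative shared congestion state for all TCP connections to one peer host.
    class SharedCongestionWindow:
        def __init__(self):
            self.cwnd = 4.0          # one window (in segments) for the whole set
            self.outstanding = 0     # segments in flight across all connections

        def may_send(self):
            # Any connection asks the shared window before transmitting.
            return self.outstanding < self.cwnd

        def on_send(self):
            self.outstanding += 1

        def on_ack(self):
            # Every connection's acks grow the same window, so a new connection
            # starts from the already-learned window instead of slow start.
            self.outstanding -= 1
            self.cwnd += 1.0 / self.cwnd

        def on_congestion(self):
            # One connection's loss signal throttles all of them at once.
            self.cwnd = max(self.cwnd / 2, 1.0)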

Page 8: TCP Performance

Solving Problems and Improving TCP Performance

Increase the default socket buffer size

Increases the maximum receiver-advertised window at connection establishment

Allows for higher potential performance (a sketch of raising the buffer size follows below)

Calculate an ack-compression factor

If there is significant ack compression, slow the data rate to limit the danger of packet loss
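A minimal sketch of raising the receive buffer on a single socket with the standard sockets API; this illustrates the same effect as raising the system-wide default, and the 1 MB value is an arbitrary example, not a figure from the paper:

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

    # Ask the kernel for a larger receive buffer before connecting, so the
    # receiver can advertise a bigger window during connection establishment.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 1 * 1024 * 1024)

    # The kernel may round or cap the value; check what was actually granted.
    granted = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
    print("receive buffer size:", granted)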

Page 9: TCP Performance

TCP Vulnerabilities and Misbehaving Receivers

Ack Division

Receiver acknowledges a single segment in many pieces, sending up to one ack per byte

Leads the sender to grow the CW faster than normal

(Figure: a bunch of acks for one segment, then a burst from the sender one RTT later)
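As an illustration of the attack (the sequence numbers and segment size are invented for the example), a misbehaving receiver could turn the acknowledgement of one segment into many partial acks:

    # Illustrative only: the ack numbers a misbehaving receiver might send
    # after receiving one 1460-byte segment starting at sequence number seg_start.
    def ack_division(seg_start, seg_len, pieces):
        """Split the acknowledgement of one segment into several partial acks."""
        step = max(1, seg_len // pieces)
        acks = list(range(seg_start + step, seg_start + seg_len, step))
        acks.append(seg_start + seg_len)   # final ack covers the whole segment
        return acks

    # Each partial ack tricks a byte-counting sender into growing its window
    # as if a full segment had been acknowledged.
    print(ack_division(seg_start=1000, seg_len=1460, pieces=10))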

Page 10: TCP Performance

TCP Vulnerabilities and Misbehaving Receivers

Solution to Ack Division

1. Modify congestion control to guarantee segment-level granularity

2. Only increment the CW by one MSS when a valid ack arrives for the entire segment (sketch below)
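A hedged sketch of that sender-side check; segment_ends and the other names are assumptions for illustration, not the paper's code:

    from types import SimpleNamespace

    # Illustrative fix: grow the window only when the cumulative ack reaches the
    # end of a transmitted segment, so byte-granular "ack division" gains nothing.
    def on_ack(state, ack_no, mss=1460):
        covered = [end for end in state.segment_ends if end <= ack_no]
        state.cwnd += mss * len(covered)              # one increment per full segment
        state.segment_ends = [e for e in state.segment_ends if e > ack_no]

    # Two segments outstanding, ending at bytes 2460 and 3920.
    state = SimpleNamespace(cwnd=2920, segment_ends=[2460, 3920])
    for ack in (1146, 1292, 2460):                    # two byte-level acks, one real ack
        on_ack(state, ack)
    print(state.cwnd)                                 # grew by exactly one MSS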

Page 11: TCP Performance

TCP Vulnerabilities and Misbehaving Receivers

Duplicate Ack Spoofing

Receiver sends many duplicate acks for the same sequence number

The sender has no way to tell which segment is being acked

Causes the sender to enter fast-recovery mode and inflate the CW

(Figure: a burst of duplicate acks; the sender enters fast recovery and bursts one RTT later)

Page 12: TCP Performance

TCP Vulnerabilities and Misbehaving Receivers

Solution to Duplicate Ack Spoofing

Add new fields to the TCP header: "nonce" and "nonce-reply", random values sent with segments and echoed in the acks. Only increment the congestion window for replies to previously unacked packets (determined by the nonce/nonce-reply pair).

Oops... this requires modifying both servers and clients.

Alternative: the server maintains a count of un-acked segments and only increments cwnd while the count is greater than 0 (a sketch of the nonce idea follows below).
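A rough sketch of the nonce idea, under the simplifying assumption that each segment carries a single random nonce that the receiver must echo; field and class names are illustrative, not from the paper:

    import random

    # Illustrative nonce/nonce-reply bookkeeping on the sender side.
    class NonceSender:
        def __init__(self):
            self.pending = {}            # segment end byte -> nonce we sent
            self.cwnd = 2920.0

        def send_segment(self, seg_end):
            nonce = random.getrandbits(32)
            self.pending[seg_end] = nonce
            return nonce                 # carried in the (hypothetical) header field

        def on_ack(self, ack_no, nonce_reply, mss=1460):
            expected = self.pending.get(ack_no)
            if expected is not None and nonce_reply == expected:
                # Genuine ack for a previously unacked segment: grow the window.
                del self.pending[ack_no]
                self.cwnd += mss
            # Spoofed or duplicate acks carry no valid nonce-reply and are ignored.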

Page 13: TCP Performance

TCP Vulnerabilities and Misbehaving Receivers

Optimistic Acking

Receiver sends acks for segments not yet received

Decreases the perceived RTT, speeding up CW growth

(Figure: acks are sent before the segments actually arrive)

Page 14: TCP Performance

TCP Vulnerabilities and Misbehaving Receivers

Solution to Optimistic Acking

Again use nonce/nonce-reply, as the spoofer cannot guess random nonce values.

Oops... this does not take cumulative losses into account. Fix: a cumulative nonce.

Oops... it still requires modification of TCP. Alternative: "slightly" randomized segment sizes; the spoofer is unable to correctly anticipate segment boundaries, and acks with incorrect sequence numbers can be ignored (see the sketch below).
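A rough sketch of the sender-only variant, under the assumption that the sender simply remembers its randomized segment boundaries and discards acks that do not land on one; all names and the 64-byte jitter are illustrative:

    import random

    # Illustrative sender that randomizes segment sizes and drops acks that do
    # not fall exactly on a boundary it actually transmitted; optimistic acks
    # for data never sent will rarely guess a valid boundary.
    class RandomizedSender:
        def __init__(self, mss=1460):
            self.mss = mss
            self.next_seq = 1000
            self.valid_boundaries = set()

        def next_segment(self):
            size = self.mss - random.randint(0, 64)   # "slightly" random size
            self.next_seq += size
            self.valid_boundaries.add(self.next_seq)  # an ack must match exactly
            return size

        def ack_is_plausible(self, ack_no):
            return ack_no in self.valid_boundaries

    sender = RandomizedSender()
    sender.next_segment()
    print(sender.ack_is_plausible(1000 + 1460))       # likely False: a guessed ack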

Page 15: TCP Performance

Integrated Congestion Management

A TCP-friendly approach to end-system congestion management that:

Enables efficient multiplexing of concurrent flows

Enables apps and transports to adapt to congestion

Ensures proper and stable congestion behavior

Acts as a trusted intermediary between flows for bandwidth management

Provides per-host/per-domain statistics (bandwidth, loss rates, RTT)

Page 16: TCP Performance

Integrated Congestion Management

Goals

Ensure stable network behavior

Enable shared state learning for application adaptation

Guiding Principles

Put the app in control: the app decides what to transmit and how to apportion its allocated bandwidth

Accommodate traffic heterogeneity: TCP bulk transfers, short transactions, real-time flows at various rates, layered streams, new apps

Accommodate application heterogeneity (synchronous or asynchronous)

Learn from the application

Page 17: TCP Performance

Integrated Congestion Management

CM Functions

Query path status

Schedule data transmissions

Update variables on congestion

CM Algorithms

Rate-based AIMD control

Loss-resilient, lightweight feedback protocol

Exponential aging when feedback is infrequent

A scheduler to apportion bandwidth between flows (an API sketch follows below)
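A hedged sketch of what such a Congestion Manager interface could look like from an application's point of view; none of these names or numbers come from the paper, they are illustrative assumptions:

    import time

    # Illustrative Congestion Manager facade: query, schedule, update.
    class CongestionManager:
        def __init__(self):
            self.rate = 10_000.0        # bytes/sec currently allowed by AIMD control
            self.rtt = 0.1              # smoothed RTT estimate in seconds
            self.last_feedback = time.monotonic()

        def query(self):
            """Query path status (rate, RTT) so the app can adapt what it sends."""
            return {"rate": self.rate, "rtt": self.rtt}

        def request_send(self, nbytes, callback):
            """Schedule a transmission; call back when nbytes may go out."""
            time.sleep(nbytes / self.rate)   # a real CM would use an event loop
            callback(nbytes)

        def update(self, loss_seen):
            """Update AIMD state from receiver feedback (rate-based control)."""
            self.last_feedback = time.monotonic()
            self.rate = self.rate / 2 if loss_seen else self.rate + 1460 / self.rtt

        def age(self):
            """Exponentially age the rate when feedback has been absent too long."""
            if time.monotonic() - self.last_feedback > 2 * self.rtt:
                self.rate /= 2

    cm = CongestionManager()
    cm.request_send(1460, lambda n: print("sent", n, "bytes; path:", cm.query()))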

Page 18: TCP Performance

Integrated Congestion Management

(Figure: web traces for a web-like workload with 4 concurrent connections using TCP Reno; note the high variance)

(Figure: the same workload with TCP/CM; note the extremely low variance)

Page 19: TCP Performance

Integrated Congestion Management

More efficient if both sender and receiver use CM, but this is not required

CM improves reliability and consistency, making apps better netizens

CM ensures proper and stable congestion behavior

Page 20: TCP Performance

Endpoint Admission Control

Goal of admission control (AC): provide QoS to soft real-time flows

Traditional Approaches

Integrated Services: per-flow, router-based AC

Flows must request service from the network; acceptance depends on the level of available resources

Limited scalability (routers must keep per-flow state and process per-flow reservation messages)

Differentiated Services: routers use priority/buffering mechanisms based on the DS field in packet headers

No per-flow admission control or signaling; routers do not maintain per-flow state

Good scalability, but no AC: QoS degrades if resources are limited

Goal: IntServ QoS and DiffServ scalability while maintaining compatibility with best-effort traffic

Page 21: TCP Performance

Endpoint Admission Control

Router Scheduling Mechanisms

FIFO: no flow is admitted if the load observed while probing is greater than the capacity (no stolen bandwidth)

Fair Queuing: isolates flows to give each a fair share; accepting a flow could impair service to others, with the possibility of lower-than-optimal utilization

Coexisting with Best-Effort Traffic

Don't "borrow" bandwidth from best-effort traffic

Don't allow best-effort traffic to preempt AC traffic

Multiple Levels of Service

Send all probes (regardless of priority) at the same level

Send probes at a different level than any data traffic, to avoid "stealing" service between levels

Page 22: TCP Performance

Endpoint Admission Control

Endpoint Probing Algorithms

Acceptance Threshold

Too low a threshold leads to higher blocking

All AC flows should adopt a uniform acceptance threshold

Accuracy

In-band probing: probe packets at the same priority as data (shorter set-up times, no starvation)

Out-of-band probing: probe packets at lower priority than data packets (no data packet loss)

Thrashing

Accepted flow levels are low, but heavy probe traffic prevents further admissions

Use slow-start probing (a sketch of threshold-based probing follows below)
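An illustrative sketch of a threshold-based endpoint admission decision; the probe count, the simulated 2% drop rate, and the 5% threshold are assumptions for the example, not values from the paper:

    import random

    # Illustrative endpoint admission check: probe the path, measure probe loss,
    # and admit the flow only if loss stays below the acceptance threshold.
    def send_probe():
        """Stand-in for sending one probe packet; returns True if it was acked."""
        return random.random() > 0.02          # pretend the path drops ~2% of probes

    def admit_flow(threshold=0.05, n_probes=100):
        lost = sum(1 for _ in range(n_probes) if not send_probe())
        loss_rate = lost / n_probes
        return loss_rate <= threshold          # admit only below the uniform threshold

    # A slow-start probing variant would ramp the probe rate up in stages instead
    # of probing at full rate immediately, reducing thrashing.
    print("flow admitted:", admit_flow())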

Page 23: TCP Performance

Endpoint Admission Control

Simulations

Probing algorithms: slow-start, early reject, and simple

Options: in-band probing, out-of-band probing, signaling congestion with packet drops, signaling congestion with congestion marks

Page 24: TCP Performance

Endpoint Admission Control

Thrashing: loss rates for the simple and early-reject probing algorithms with the in-band design are substantially worse than those of the IntServ "MBAC" benchmark; the slow-start-based algorithm, in contrast, is much closer to MBAC, and out-of-band dropping has loss-load frontiers similar to MBAC

Robustness: in-band dropping has the highest drop rates, out-of-band marking the lowest

Heterogeneous Thresholds: lower thresholds for higher QoS lead to higher blocking rates and thus lower QoS; uniform thresholds seem to yield higher overall QoS

Heterogeneous Traffic: traditional admission control discriminates against bigger flows; edge admission control is less discriminating and can admit too much

Multi-hop: drop rates are higher on longer, multi-hop paths, but admission accuracy is unaffected

Page 25: TCP Performance

Endpoint Admission Control

Cons

Substantial setup delay

QoS unpredictable across settings

No mechanism to enforce uniform admission thresholds

Users could forge admission fields and be granted higher QoS without being subject to AC

Pros

Scalable scheduling for soft real-time applications

Provides multiple QoS levels alongside best-effort

Early tests show good TCP-friendliness

Page 26: TCP Performance

Summary

TCP was not designed for today's prevalent traffic patterns

TCP was designed for a cooperative environment and contains vulnerabilities

Modest (server-side) changes can make TCP more robust for current traffic and less vulnerable to spoofing and DoS attacks

More substantial (sender and receiver) changes can further add congestion management and improve overall stability and security

Page 27: TCP Performance

Discussion