Ch - 6 The Transport Layer

A detailed discussion of the transport layer protocols.

Introduction to the Transport Layer
Goal: effective, reliable and cost-effective data transmission services.
Topics:
- Upper layer services
- Transport service primitives
- Berkeley sockets
- Example of socket programming: an Internet file server
- Packetizing
- Connection control
- Transport addressing
- Flow control

Why two distinct layers - why is the network layer alone not adequate?
- The transport layer runs on the users' machines, whereas the network layer largely runs on routers operated by the carrier (at least in a WAN).
- If packets are lost, service is inadequate, or a router crashes, the user's equipment still holds the data and can retransmit; the transport layer thus ensures successful end-to-end packet delivery.
- Users have no control over the network layer's services and protocols, so a layer above the network layer is needed to improve the overall QoS.
- In case of failures inside the network, the transport layer can set up a new network connection to the remote transport entity to ensure end-to-end transmission.
- Thus the transport layer provides a reliable service on top of an unreliable network layer.

Congestion Control in Layer 4
- Congestion control is done by both the data link layer and the network layer, but the transport layer provides congestion control with better QoS than the network layer can.
- Typical QoS parameters are:
  - Connection establishment delay
  - Throughput
  - Protection
  - Transit delay
  - Residual error rate
  - Resilience
  - Priority
- The transport layer takes care of process-to-process delivery, unlike the network layer, which delivers packets host to host, hop by hop across the routers.

Addressing in the Transport Layer
- The data link layer needs a MAC address and the network layer needs an IP address; similarly, the transport layer needs a port number to choose among the multiple processes running on the destination host.
- Port numbers are 16-bit integers between 0 and 65535.
- The client program identifies itself with a port number chosen at random, called an ephemeral port number.
- Servers also need port numbers, but these are not chosen randomly.
- A universally agreed server port number is called a well-known port number, so every client knows the well-known port number corresponding to each server.

Addressing in the Transport Layer
- The network layer's IP address identifies a particular host among millions of hosts; once the host (which may support many processes) has been located, the port number identifies one of the processes on the selected host.
- Port address ranges are assigned by IANA (Internet Assigned Numbers Authority):
  - Well-known ports: 0 to 1023, controlled by IANA.
  - Registered ports: 1024 to 49151, not controlled by IANA but registered with it to avoid duplication.
  - Dynamic ports: 49152 to 65535, neither controlled nor registered by IANA; they can be used by any process.
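As a small illustration of ephemeral versus well-known ports, the following is a minimal sketch using Python's standard socket module. Port 80 is just the usual well-known HTTP example; the ephemeral port printed is chosen by the OS and will vary from run to run.

```python
# Minimal sketch: ephemeral (dynamic) port vs. well-known port.
import socket

HTTP_PORT = 80  # a well-known port (0-1023), reserved for servers

# Binding to port 0 asks the OS to pick any free ephemeral port for us.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.bind(("", 0))
ephemeral_port = client.getsockname()[1]
print(f"OS chose ephemeral port {ephemeral_port} for the client socket")
client.close()
```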

Addressing in the Transport Layer - Sockets
- The socket interface is a set of system calls or procedures for communication; a socket acts as an end point.
- Two processes on two end systems can communicate if each has a socket at its end.
- Socket address: the combination of an IP address and a port number.
- Two types of socket: client sockets and server sockets.
- A connection is identified by four pieces of information (source IP, source port, destination IP, destination port); the IP addresses go into the IP header, while the TCP/UDP header carries the port numbers.

Transport Service Primitives
- Primitives are the calls or procedures implemented in the OS kernel, in a library package, or on the network interface card.
- For example, the server executes the LISTEN primitive, and when a client wants to talk to the server it executes the CONNECT primitive.
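The sketch below maps these primitives onto the Berkeley socket API in Python. It is only an illustration: HOST and PORT are assumed values, and server() and client() would be run in separate processes.

```python
# Minimal sketch of LISTEN/ACCEPT/CONNECT using Berkeley sockets.
import socket

HOST, PORT = "127.0.0.1", 6000  # assumed values for the example

def server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind((HOST, PORT))
        s.listen(1)                 # LISTEN primitive: wait passively
        conn, addr = s.accept()     # ACCEPT: block until a client connects
        with conn:
            print("connected by", addr)
            conn.sendall(b"hello from server")

def client():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.connect((HOST, PORT))     # CONNECT primitive: active open
        print(s.recv(1024))
```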

Segment - Packet - Frame Nesting
- The message exchanged between two transport entities is called a segment (TPDU, Transport Protocol Data Unit, in some older protocols).
- Segments are encapsulated in packets (at the network layer), which in turn are encapsulated in frames (at the data link layer).

Elements of Transport Protocols
- The transport layer's responsibilities resemble those of the data link layer: error control, sequencing and flow control.
- So what is the difference between the two layers?
- At layer 2, two routers communicate over a single physical link, so flow control, error control, etc. are handled on that single link.
- At the transport layer, the physical channel is replaced by the entire subnet, so flow control, error control, etc. must be handled at the subnet (end-to-end) level.

Figure: The environment of the data link layer vs. the environment of the transport layer.

Difference between the Data Link Layer and the Transport Layer

Data link layer                              | Transport layer
Communication over a physical channel        | Communication through the subnet
Destination need not be explicitly addressed | Explicit destination addressing required
Simple connection establishment              | Complicated initial connection establishment
No storage capacity in the channel           | Storage on the user's device and some in the subnet
No additional delays                         | Delays introduced due to storage

Connection-Oriented Services
- To connect two hosts, Host 1 and Host 2, a connection request and a reply acknowledgement are first exchanged between them.
- Each connection has a unique sequence number to avoid duplication; the sequence number ensures the sender cannot create more than one connection with the same number, since a repeated number would result in duplication.
- Similarly, each acknowledgement carries an acknowledgement number.
- The receiver has to keep a record of the sequence and acknowledgement numbers for each remote host for a specific time.

Connection-Oriented Services
- In the transport layer, connection establishment is a comparatively complex process.
- Problems:
  - If ACKs are not received on time, packets are retransmitted.
  - In a subnet, datagrams may take different routes, and datagrams are lost if a path is congested or fails.
  - Packet duplication can cause the same connection to be re-established.
- Remedy: techniques for restricting packet lifetime:
  - Restricted network design.
  - Putting a hop counter in each packet.
  - Timestamping each packet.

Three-Way Handshake: Normal Operation
- Host 1 chooses a sequence number x and sends a connection request (CR) segment containing it to Host 2.
- Host 2 replies with a connection accepted segment that acknowledges x and announces its own sequence number y.
- Host 1 acknowledges Host 2's choice of y in the first data segment that it sends.
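The following is a toy simulation (not real TCP) of the three segments just described; the initial sequence numbers x and y are chosen at random, as an initial-sequence-number generator would.

```python
# Toy simulation of the three-way handshake message flow.
import random

def three_way_handshake():
    x = random.randrange(2**32)                      # Host 1 picks seq = x
    cr = {"type": "CR", "seq": x}                    # connection request
    print("H1 -> H2:", cr)

    y = random.randrange(2**32)                      # Host 2 picks seq = y
    accept = {"type": "ACCEPT", "seq": y, "ack": x}  # acknowledges x
    print("H2 -> H1:", accept)

    data = {"type": "DATA", "seq": x, "ack": y}      # first data segment ACKs y
    print("H1 -> H2:", data)

three_way_handshake()
```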

Three-Way Handshake: Abnormal Operation (Delayed Duplicate CR)
- The first segment is a delayed duplicate CR from an old connection, and Host 1 knows nothing about it.
- Host 2 receives this segment and replies to Host 1 with an ACK, accepting the connection.
- But Host 1 is not trying to establish any connection, so it sends a REJECT carrying ACK = y.
- Host 2 realises it was fooled by a delayed duplicate and aborts the connection.

Three-Way Handshake: Abnormal Operation (Duplicate CR and Duplicate ACK)
- Worst case: both a delayed duplicate CR and a delayed duplicate ACK are floating around in the subnet.
- Host 2 receives the duplicate CR and replies with an ACK carrying its own sequence number y.
- Host 1 is not trying to establish any connection, so it sends a REJECT carrying ACK = y.
- Host 2 realises it was fooled by the delayed duplicates and aborts the connection.

Three-Way Handshake: Connection Release
- If one host releases the connection, the other host can keep sending segments.
- Host 1 sends a connection release message to Host 2, which Host 2 confirms with an ACK.
- Host 2 can still continue sending data; once it has finished, it sends its own connection release message to Host 1.
- Host 1 ACKs this message, and the connection is finally released.

Two Army Problem

Example: The Two-Army Problem
- The blue and white armies are fighting each other.
- The white army is larger than either blue army 1 or blue army 2 on its own.
- Individually, blue 1 and blue 2 each have too few soldiers to win against white, but combined they have more soldiers and could possibly win.
- So blue 1 and blue 2 need to synchronize a combined attack.

The two-army problem

Formulating an Algorithm so that the Blue Army Wins
Protocol 1: Two-way handshake
- The chief of blue army 1 sends a message to the chief of blue army 2 proposing the attack: "Shall we attack on 1st January?"
- The messenger reaches the chief of blue army 2, who agrees and sends back the reply "Yes" (an ACK).
- This process is a two-way handshake.
- Will the attack take place? Probably not, because the chief of blue army 2 does not know whether his reply reached blue army 1 successfully.

Formulating an Algorithm so that the Blue Army Wins
Protocol 2: Three-way handshake
- Improve the two-way algorithm into a three-way algorithm.
- Assume that none of the messages is lost, so blue army 2 will also get an acknowledgement from blue army 1.
- But now the chief of blue army 1 will hesitate, because he does not know whether the last message he sent got through.
- We could make it a four-way handshake, but that does not help either: in every protocol, the uncertainty after the last handshake message always remains.
- In fact, no protocol exists that solves this problem.

Flow Control and Buffering
- The sliding window protocol is meant for flow control, but sending an ACK after every small packet is impractical because it increases congestion.
- Buffering is better: the receiver buffers the segments and sends acknowledgements as soon as they are taken in for processing.
- Rather than an individual buffer per connection, a pool of buffers can be shared among all connections (except for high-bandwidth applications).
- A general way to manage dynamic buffer allocation is to decouple the buffering from the acknowledgements.

Flow Control and Buffering
Dynamic buffer management with a variable-sized window:
- Initially, the sender requests a certain number of buffers, based on its perceived needs.
- The receiver grants as many buffers as it can afford.
- Every time the sender transmits a segment, it must decrement its allocation, stopping altogether when the allocation reaches zero.
- The receiver separately piggybacks both ACKs and new buffer allocations onto the reverse traffic.
- The sender's window size can be adjusted dynamically not only by the buffer availability at the receiver but also by the capacity and traffic of the subnet: the bigger the subnet capacity and the lighter the traffic, the larger the sender's window and the larger the buffer allocation.
(A sketch of this credit-based scheme is shown below.)
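This is a minimal sketch of the credit idea only; the numbers of buffers requested and granted are made up for illustration.

```python
# Credit-based buffer allocation: the sender may transmit only while it
# still holds buffer credit granted by the receiver.
class Sender:
    def __init__(self):
        self.credit = 0                 # buffers currently granted by receiver

    def grant(self, n):                 # receiver piggybacks a new allocation
        self.credit += n

    def send_segment(self, seg):
        if self.credit == 0:
            print("blocked: no buffer credit, waiting for a new grant")
            return False
        self.credit -= 1                # each segment consumes one buffer
        print(f"sent {seg}, remaining credit = {self.credit}")
        return True

s = Sender()
s.grant(2)                              # receiver can afford 2 buffers
for seg in ["seg-1", "seg-2", "seg-3"]:
    s.send_segment(seg)                 # third send blocks until re-granted
```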

Flow Control and Buffering
Figure: (a) Chained fixed-size buffers. (b) Chained variable-sized buffers. (c) One large circular buffer per connection.

Error Control and Crash Recovery
- A crash results in loss of data.
- In case of a router crash, the two transport entities must exchange information about which segments were received and which were lost, and then resend the lost ones.
- If the network layer provides connection-oriented service, only the lost virtual-circuit connection needs to be rebuilt.
- The receiver may first send an ACK and then perform the write to the output stream for the received segment.
- But a crash can occur in the middle of this, so it may seem better to reverse the order: perform the write first and then send the ACK.
- However, if the ACK is then lost, duplicate segments will be transmitted. So no matter how the sender and receiver are programmed, there are always situations in which the protocol fails to recover properly.

Error Control and Crash Recovery
Recovery from IMP and host crashes:
- The transport entities should exchange information after a crash about which segments were received and which were lost.
- The receiver must broadcast information about its crash to the neighbouring nodes.
- The receiver can be programmed in one of two ways: ACK first, or write first.
- The sender can be programmed in one of four ways:
  - Always retransmit the last segment.
  - Never retransmit the last segment.
  - Retransmit only in state S0.
  - Retransmit only in state S1.

Error Control and Crash Recovery
Figure: Different combinations of client and server strategy.

The Internet Transport Protocols
- The Internet has two main protocols in the transport layer: one is connection-oriented and the other is connectionless.
- TCP (Transmission Control Protocol) is the connection-oriented protocol.
- UDP (User Datagram Protocol) is the connectionless protocol.
- UDP is essentially IP with a short additional header.

Transmission Control Protocol
- Introduction to TCP
- The TCP service model
- The TCP protocol
- The TCP segment header
- TCP connection establishment
- TCP connection release
- TCP connection management
- TCP window management
- TCP congestion control
- TCP timer management

TCP - Introduction
TCP (Transmission Control Protocol) was specifically designed to provide a reliable end-to-end byte stream over an unreliable internetwork. An internetwork differs from a single network because different parts may have quite different topologies, bandwidths, delays, packet sizes, and other parameters. TCP was designed to adapt dynamically to the properties of the internetwork and to be robust in the face of many kinds of failures.

TCP - Introduction
- Since TCP is a connection-oriented service, it is reliable.
- Services: stream data transmission, reliability, flow control, full-duplex operation and multiplexing.
- The application's byte stream is simply broken into segments, which are passed to IP for delivery.
- Only blocks that are not ACKed within a specified time period (timer mechanism) are retransmitted.
- Reliability in TCP deals with lost, delayed, duplicated or corrupted packets.
- Full-duplex operation: TCP can send and receive at the same time.
- Multiplexing: numerous simultaneous upper-layer conversations can be multiplexed over a single connection.

TCP Service Model
- To obtain TCP service, both sender and receiver must create end points called sockets; each socket has a socket number (address) consisting of the host's IP address and a port number.
- A port is the TCP name for an end point; to obtain TCP service, a connection must be established between a socket on the sending machine and a socket on the receiving machine.

Socket calls

TCP Service Model
- A TCP connection is a byte stream, not a message stream: message boundaries are not preserved end to end.
- For example, if the sending process does four 512-byte writes to a TCP stream, the data may be delivered to the receiving process as four 512-byte chunks, two 1024-byte chunks, one 2048-byte chunk, or in some other way.
- When an application passes data to TCP, TCP may send it immediately or buffer it (in order to collect a larger amount to send at once), at its discretion.
(A small sketch illustrating this is shown below.)
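This is a minimal local sketch of the "byte stream, not message stream" point, assuming Python's socket.socketpair(); how the four 512-byte writes are chunked on the receiving side depends on the OS and timing, so the printed sizes may vary.

```python
# Four separate 512-byte writes; the receiver sees an unstructured byte stream.
import socket

a, b = socket.socketpair()
for i in range(4):
    a.sendall(bytes([i]) * 512)    # four separate 512-byte writes
a.close()

received = []
while True:
    chunk = b.recv(2048)
    if not chunk:
        break
    received.append(len(chunk))
print("chunk sizes seen by the receiver:", received)  # boundaries not preserved
b.close()
```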

TCP Service Model: Urgent Data
- URGENT is a flag in TCP.
- If data must be sent together with some control information, the sending entity marks it with the URGENT flag.
- TCP then stops accumulating data and transmits everything it has for that connection immediately.
- When the data reaches the receiver, the receiving application is interrupted and processes the urgent data.

TCP Service Model
Well-known ports:
- Port numbers below 1024 are well-known ports reserved for servers.
- All TCP connections are point-to-point (two endpoints) and full duplex (bidirectional); there is no multicasting or broadcasting.
Push flag and buffering:
- Since message boundaries are not preserved, TCP may send data immediately or may collect data for some time and send it all at once (buffering).
- For immediate transmission, the PUSH flag can be used, which forces TCP to send the data without delay.

TCP Service Model
Some assigned ports:

Port | Protocol | Use
21   | FTP      | File transfer
23   | Telnet   | Remote login
25   | SMTP     | E-mail
69   | TFTP     | Trivial File Transfer Protocol
79   | Finger   | Lookup info about a user
80   | HTTP     | World Wide Web
110  | POP-3    | Remote e-mail access
119  | NNTP     | USENET news

The TCP Protocol
- Every byte on a TCP connection has its own 32-bit sequence number.
- Segments: data is exchanged in segments consisting of a 20-byte fixed header (plus an optional part) followed by zero or more data bytes.
- Segment size: each segment, including the TCP header, must fit in the 65,535-byte IP payload. Each network also has an MTU (Maximum Transfer Unit), typically a few thousand bytes, and each segment must fit the MTU, which sets the practical upper limit on segment size.
- Fragmentation: if a segment is too large, it is divided into fragments; each new fragment gets its own IP header, so fragmentation increases the overhead.
- Timer: TCP's basic protocol is a sliding window protocol in which retransmissions occur only when the retransmission timer expires.

The TCP Protocol: Possible Problems
- Segments, in whole or in part, may fail to reach the destination.
- Segments may arrive out of order.
- Transmission delays are not fixed, so unnecessary retransmissions may occur.
- The receiver must use the sequence numbers to restore the proper order.
- Congestion or broken links may occur along the path.

The TCP Segment Header
- The header has a 20-byte fixed format (plus an optional part).
- Up to 65,535 - 20 - 20 = 65,495 data bytes may follow, where the first 20 is the IP header and the second 20 is the TCP header.
- Segments carrying only ACKs or control messages consist of a TCP header with no data.

The TCP Segment Header

The TCP Segment Header
- Acknowledgement number: a 32-bit field carrying the acknowledgement number (the next byte expected by the receiver).
- Header length / data offset: a 4-bit field giving the length of the TCP header in 32-bit words (multiply by 4 to get bytes); it varies because of the Options field.
- Reserved: currently unused, kept for the future (all 0s).
- Control bits / flag bits:
  - CWR (Congestion Window Reduced): when set, tells the peer that the sender has reduced its congestion window in response to congestion.
  - ECE (ECN-Echo): when set, informs the sender to slow down because congestion has been signalled via ECN (Explicit Congestion Notification).
  - URG (Urgent pointer valid): 1 = the segment carries urgent data.
  - ACK: 1 = the Acknowledgement number field is valid.
  - PSH (Push): 1 = the receiver should deliver this data to the application with the least possible delay.
  - RST (Reset): aborts the connection and empties the buffers.
  - SYN (Synchronize): when set, the sender is trying to synchronize sequence numbers (connection establishment).
  - FIN (Finish): when set, the sender has reached the end of its data and wants to release the connection.
(A small header-parsing sketch follows below.)
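The sketch below unpacks the fixed 20-byte part of a TCP header with Python's struct module, extracting the fields listed above; the example bytes are fabricated purely for illustration.

```python
# Parse the fixed 20-byte TCP header into its fields.
import struct

def parse_tcp_header(raw: bytes):
    (src_port, dst_port, seq, ack,
     offset_flags, window, checksum, urgent) = struct.unpack("!HHIIHHHH", raw[:20])
    header_len = (offset_flags >> 12) * 4          # data offset is in 32-bit words
    flags = offset_flags & 0x01FF                  # NS..FIN flag bits
    return {
        "src_port": src_port, "dst_port": dst_port,
        "seq": seq, "ack": ack, "header_len": header_len,
        "URG": bool(flags & 0x020), "ACK": bool(flags & 0x010),
        "PSH": bool(flags & 0x008), "RST": bool(flags & 0x004),
        "SYN": bool(flags & 0x002), "FIN": bool(flags & 0x001),
        "window": window, "checksum": checksum, "urgent_ptr": urgent,
    }

# Fabricated example: ports 1234 -> 80, SYN set, 20-byte header.
example = struct.pack("!HHIIHHHH", 1234, 80, 1000, 0, (5 << 12) | 0x002, 65535, 0, 0)
print(parse_tcp_header(example))
```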

The TCP Segment Header (continued)
- Window size: used for flow control; this field tells how many bytes the receiver is willing to accept, up to a maximum of 65,535.
- Checksum: a checksum (not a CRC) computed over the TCP header, the data and a pseudoheader.
- Urgent pointer: tells the receiver where the urgent data in the segment ends.
- Options: extra information, if any, supplied by the sender.
- Padding: a series of zeros appended so that the header ends on a 32-bit boundary.

TCP Connection Management
- To establish a connection, one side (say, the server) passively waits for an incoming connection by executing the LISTEN and ACCEPT primitives.
- The other side (say, the client) executes a CONNECT primitive, specifying the IP address and port to which it wants to connect and the maximum TCP segment size it is willing to accept.
- The CONNECT primitive sends a TCP segment with SYN = 1 and ACK = 0 and waits for a response.

TCP Connection Management
- When this segment arrives at the destination, the TCP entity there checks whether some process has done a LISTEN on the port given in the Destination port field.
- If not, it sends a reply with the RST bit set to reject the connection.
- If some process is listening on the port, that process is given the incoming TCP segment; it can either accept or reject the connection. If it accepts, an acknowledgement segment is sent back.
(The sketch below shows both outcomes using the socket API.)
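A minimal sketch of the two outcomes, assuming Python sockets on the local host: the second connect targets port 1, which is assumed to have no listener, so the SYN is answered with RST, surfacing in Python as ConnectionRefusedError.

```python
# Accepted vs. rejected (RST) connection attempts.
import socket

# Outcome 1: a process IS listening, so the connection is accepted.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))          # let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]
ok = socket.create_connection(("127.0.0.1", port))
print("accepted on port", port)
ok.close(); listener.close()

# Outcome 2: nobody is listening on this port, so the SYN is rejected.
try:
    socket.create_connection(("127.0.0.1", 1), timeout=2)   # port 1: assumed unused
except (ConnectionRefusedError, OSError) as e:
    print("rejected:", e)
```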

TCP Congestion Control
When the load offered to any network is more than it can handle, congestion builds up, and the Internet is no exception. Algorithms have been developed over the years to deal with congestion. Although the network layer also tries to manage congestion, most of the heavy lifting is done by TCP, because the real solution to congestion is to slow down the data rate. In theory, congestion can be dealt with by applying a principle borrowed from physics: the law of conservation of packets. The idea is not to inject a new packet into the network until an old one leaves (i.e., is delivered). TCP attempts to achieve this goal by dynamically manipulating the window size. (A sketch of this window manipulation is shown below.)
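The text above only says that TCP slows down by manipulating its window. The following is a generic additive-increase/multiplicative-decrease (AIMD) sketch of that idea, not TCP's exact algorithm; the loss pattern is invented for illustration.

```python
# Generic AIMD window adjustment: grow additively, halve on loss.
def aimd(rounds, losses, mss=1):
    cwnd = 1 * mss                       # start with a small window
    history = []
    for r in range(rounds):
        if r in losses:
            cwnd = max(mss, cwnd // 2)   # multiplicative decrease on loss
        else:
            cwnd += mss                  # additive increase per round trip
        history.append(cwnd)
    return history

print(aimd(rounds=12, losses={5, 9}))    # window grows, halves on "loss"
```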

TCP- Congestion Control

Figure: (a) A fast network feeding a low-capacity receiver. (b) A slow network feeding a high-capacity receiver.

TCP Termination Protocol

Figure: The states used in the TCP connection management finite state machine.

TCP Sliding Window
Figure: Window management in TCP.

TCP Timer Management
TCP uses multiple timers (at least conceptually) to do its work. The most important of these is the retransmission timer. When a segment is sent, a retransmission timer is started. If the segment is acknowledged before the timer expires, the timer is stopped. If, on the other hand, the timer goes off before the acknowledgement comes in, the segment is retransmitted (and the timer started again). The question that arises is: how long should the timeout interval be? This problem is much more difficult in the Internet transport layer than in the generic data link protocols, where the delay is very predictable. The solution is to use a highly dynamic statistical algorithm that constantly adjusts the timeout interval based on continuous measurements of network performance. This algorithm was proposed by Jacobson in 1988.

TCP Timer Management: Jacobson's Algorithm
- TCP keeps a smoothed round-trip time estimate, RTT, for each connection; it is a per-connection variable.
- When a segment is sent, the retransmission timer starts.
- If the ACK fails to reach the source before the timer expires, the segment is retransmitted.
- If the ACK arrives before the timer expires, TCP measures the time M taken by the ACK and adjusts RTT to a new value:
      RTT = α·RTT + (1 − α)·M,   where α is the smoothing factor, typically α = 7/8.
- Even with a good value of RTT, it is not easy to choose the timeout, so Jacobson proposed a second smoothed variable D (the deviation):
      D = α·D + (1 − α)·|RTT − M|
- The timeout is then calculated as:
      Timeout = RTT + 4·D
(A short numeric sketch of these updates follows below.)
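This is a direct sketch of the update rules above, with α = 7/8; the sample measurements M are invented for illustration.

```python
# Jacobson-style smoothed RTT, deviation, and timeout computation.
ALPHA = 7 / 8

def jacobson(samples, rtt=1.0, dev=0.0):
    for m in samples:                          # m = measured round-trip time
        dev = ALPHA * dev + (1 - ALPHA) * abs(rtt - m)   # uses the old RTT
        rtt = ALPHA * rtt + (1 - ALPHA) * m
        timeout = rtt + 4 * dev
        print(f"M={m:.2f}  RTT={rtt:.3f}  D={dev:.3f}  Timeout={timeout:.3f}")

jacobson([0.9, 1.1, 1.0, 2.5, 1.0])            # a delay spike inflates the timeout
```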

UDP - User Datagram Protocol
- UDP is a connectionless, unreliable transport-level protocol. It is primarily used for protocols that require a broadcast capability.
- Many client-server applications that have one request and one response use UDP rather than go to the trouble of establishing and later releasing a connection.
- It provides no packet sequencing, may lose packets, and does not check for duplicates.
- It is used by applications that do not need a reliable transport service. Application data is encapsulated in a UDP header, which in turn is encapsulated in an IP header.
- UDP distinguishes different applications by port number, which allows multiple applications running on a given computer to send and receive datagrams independently of one another.

UDP - User Datagram Protocol
Connectionless:
- No handshaking between UDP sender and receiver.
- Each UDP segment is handled independently of the others.
Why is there a UDP?
- No connection establishment (which can add delay).
- Simple: no connection state at sender or receiver.
- Small segment header.
- No congestion control: UDP can blast away as fast as desired.
Often used for streaming multimedia applications, which are loss tolerant and rate sensitive.
Other UDP uses: DNS, SNMP.
Reliable transfer over UDP: add reliability at the application layer with application-specific error recovery (e.g., TFTP, a file transfer protocol built on UDP that adds its own recovery).
(A minimal UDP request/response sketch is shown below.)
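A minimal one-request, one-response sketch with no connection, assuming Python's socket module; HOST and PORT are illustrative, and server() and client() would run in separate processes.

```python
# UDP request/response: no handshake, no connection state, no retransmission.
import socket

HOST, PORT = "127.0.0.1", 9999   # assumed values for the example

def server():
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.bind((HOST, PORT))
        data, addr = s.recvfrom(2048)        # each datagram handled independently
        s.sendto(b"reply: " + data, addr)    # reply straight to the sender

def client():
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(b"request", (HOST, PORT))
        print(s.recvfrom(2048)[0])           # may be lost; no retransmission here
```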

UDP Header
- A UDP segment consists of an 8-byte header followed by the data.
- UDP only provides TSAPs (ports) for applications to bind to; it does not provide reliable or ordered delivery.
- The checksum is optional.

The UDP header.

UDP Header
- The two ports serve the same function as in TCP: to identify the end points within the source and destination machines.
- The UDP length field includes the 8-byte header and the data.
- The UDP checksum is used to verify the integrity of the header and the data.
Sender:
- Treat the segment contents as a sequence of 16-bit integers.
- Checksum: addition (1's complement sum) of the segment contents.
- The sender puts the checksum value into the UDP checksum field.
Receiver:
- Compute the checksum of the received segment.
- Check whether the computed checksum equals the checksum field value:
  - NO: an error is detected.
  - YES: no error is detected (though errors may still exist; more on this later).
(A small checksum sketch follows below.)
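The sketch below computes the 16-bit one's complement sum described above over an arbitrary byte string; a real UDP checksum would also cover the pseudoheader discussed next.

```python
# 16-bit one's complement checksum over a byte string.
def ones_complement_checksum(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"                           # pad to whole 16-bit words
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]     # treat contents as 16-bit integers
        total = (total & 0xFFFF) + (total >> 16)  # fold carries back in
    return ~total & 0xFFFF                        # one's complement of the sum

segment = b"example UDP payload"
print(hex(ones_complement_checksum(segment)))
# Receiver check: summing the data together with the checksum yields 0xFFFF.
```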

UDP pseudoheader fields: Source IP address, Destination IP address, all-zero byte, Protocol number, UDP length.
1. The pseudoheader ensures that the datagram has indeed reached the correct destination host and port.
2. The zero padding and the pseudoheader are used only for the computation of the checksum and are not transmitted.
Figure: The UDP pseudoheader.

UDP- Well known ports

UDP Operations
Connectionless service:
- Each datagram sent by UDP is an independent datagram, even if successive datagrams come from the same source.
- The datagrams are not numbered, no connection establishment or release is needed, and each datagram may follow a different path.
Flow control and error control:
- UDP is a simple, unreliable protocol. There is no flow control, so the receiver can be overwhelmed with incoming messages.
- There is no mechanism for error control except the checksum in the header; if the checksum detects an error, the segment is discarded.
Other operations: encapsulation and decapsulation, queuing.

UDP-Encapsulation & Decapsulation

Queues in UDP

Queues in UDP
- The client requests a port number from the OS; incoming and outgoing queues are created for each process.
- There is one port per process, so there is only one pair of queues per process.
- The client sends its messages to the outgoing queue using the source port address.
- UDP removes the queued messages one by one, adds the UDP header, and delivers them to IP.
- If the outgoing queue overflows, the OS tells the client to wait before sending the next message.
- When messages arrive for the client, UDP checks whether an incoming queue exists for the port; if it does, UDP appends the received datagram to the end of that queue.
- If the incoming queue overflows, UDP simply discards the datagram and prepares to notify the sender that the port is unavailable.
- In the case of server queuing, the port address is a well-known port address; the rest of the queuing steps are the same.

Applications of UDP
UDP is suitable for applications with the following requirements:
- A simple response to a simple request.
- Flow control and error control are not essential.
- No bulk data needs to be sent.
Other uses:
- UDP is suitable for multicasting applications.
- UDP is used for management protocols such as SNMP (Simple Network Management Protocol).
- UDP is used for RIP (Routing Information Protocol).

Comparison of TCP and UDP

TCP                           | UDP
Full-featured protocol        | Less-featured protocol
Connection-oriented           | Connectionless
Data transmitted as a stream  | Message-based transmission
Reliable transmission         | Unreliable transmission
High overhead                 | Low overhead
Low transmission speed        | High transmission speed
Retransmissions occur         | No retransmissions occur
Flow and error control        | No flow and error control

End of Chapter 6