BC0055 - Fall Drive Assignments 2012

BCA Revamped PROGRAMME, Vth SEMESTER, ASSIGNMENTS-01

Name :

Register no. :

Learning center :

Learning center code :

Course / Program :

Semester :

Subject code :

Subject Title :

Date of submission :

Marks awarded :

Average marks of both assignments

Signature of center coordinator Signature of Evaluator

Directorate of Distance Education, Sikkim Manipal University

II Floor, Syndicate House, Manipal-576 104

1. Explain the architecture of the TCP/IP protocol suite.

Answer

The TCP/IP Protocol Suite: The TCP/IP protocol suite maps to a four-layer conceptual model known as the DARPA model, which was named after the U.S. government agency that initially developed TCP/IP. The four layers of the DARPA model are: Application, Transport, Internet, and Network Interface. Each layer in the DARPA model corresponds to one or more layers of the seven-layer OSI model. Figure 1.2 shows the architecture of the TCP/IP protocol suite. The TCP/IP protocol suite has two sets of protocols at the Internet layer:

• IPv4, also known as IP, is the Internet layer in common use today on private intranets and the Internet.

• IPv6 is the new Internet layer that will eventually replace the existing IPv4 Internet layer.

Network Interface Layer: The Network Interface Layer (also called the Network Access Layer) sends TCP/IP packets on the network medium and receives TCP/IP packets off the network medium. TCP/IP was designed to be independent of the network access method, frame format, and medium. Therefore, you can use TCP/IP to communicate across differing network types that use LAN technologies – such as Ethernet and 802.11 wireless LAN – and WAN technologies – such as Frame Relay and Asynchronous Transfer Mode (ATM). By being independent of any specific network technology, TCP/IP can be adapted to new technologies. The Network Interface layer of the DARPA model encompasses the Data Link and Physical layers of the OSI model. The Internet layer of the DARPA model does not take advantage of sequencing and acknowledgment services that might be present in the Data Link layer of the OSI model. The Internet layer assumes an unreliable Network Interface layer and that reliable communications through session establishment and the sequencing and acknowledgment of packets is the responsibility of either the Transport layer or the Application layer.

Internet Layer: The Internet layer responsibilities include addressing, packaging, and routing functions. The Internet layer is analogous to the Network layer of the OSI model. The core protocols for the IPv4 Internet layer consist of the following:

• The Address Resolution Protocol (ARP) resolves the Internet layer address to a Network Interface layer address such as a hardware address.

• The Internet Protocol (IP) is a routable protocol that addresses, routes, fragments and reassembles packets.

• The Internet Control Message Protocol (ICMP) reports errors and other information to help you diagnose unsuccessful packet delivery.

• The Internet Group Management Protocol (IGMP) manages IP multicast groups.

Transport Layer: The Transport layer (also known as the Host-to-Host Transport layer) provides the Application layer with session and datagram communication services. The Transport layer encompasses the responsibilities of the OSI Transport layer. The core protocols of the Transport layer are TCP and UDP. TCP provides a one-to-one, connection-oriented, reliable communications service. TCP establishes connections, sequences and acknowledges packets sent, and recovers packets lost during transmission. In contrast to TCP, UDP provides a one-to-one or one-to-many, connectionless, unreliable communications service. UDP is used when the amount of data to be transferred is small (such as the data that would fit into a single packet), when an application developer does not want the overhead associated with TCP connections, or when the applications or upper-layer protocols provide reliable delivery. TCP and UDP operate over both IPv4 and IPv6 Internet layers.
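
To make the TCP/UDP contrast concrete, the short Python sketch below uses only the standard socket module; the host names, ports, and payloads are purely illustrative and are not part of the study material:

import socket

# TCP: connection-oriented and reliable. A connection (three-way handshake)
# is established before any data flows, and delivery is sequenced and
# acknowledged by the protocol itself.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect(('www.example.com', 80))             # illustrative host and port
tcp.sendall(b'HEAD / HTTP/1.0\r\nHost: www.example.com\r\n\r\n')
print(tcp.recv(200))                             # bytes arrive in order, or not at all
tcp.close()

# UDP: connectionless and unreliable. A single datagram is handed to the
# network with no session, no sequencing, and no acknowledgment.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b'ping', ('127.0.0.1', 9999))         # fire-and-forget datagram
udp.close()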

Application Layer: The Application layer allows applications to access the services of the other layers, and it defines the protocols that applications use to exchange data. The Application layer contains many protocols, and more are always being developed. The most widely known Application layer protocols help users exchange information:

• HTTP: The Hypertext Transfer Protocol (HTTP) transfers files that make up pages on the World Wide Web.

• FTP: The File Transfer Protocol (FTP) transfers individual files, typically for an interactive user session.

• SMTP: The Simple Mail Transfer Protocol (SMTP) transfers mail messages and attachments.

Additionally, the following Application layer protocols help you use and manage TCP/IP networks:

• DNS: The Domain Name System (DNS) protocol resolves a host name, such as www.cisco.com, to an IP address and copies name information between DNS servers.

• RIP: The Routing Information Protocol (RIP) is a protocol that routers use to exchange routing information on an IP network.

• SNMP: The Simple Network Management Protocol (SNMP) collects and exchanges network management information between a network management console and network devices such as routers, bridges, and servers.

Windows Sockets and NetBIOS are examples of Application layer interfaces for TCP/IP applications.

2. What do you mean by RFC? Explain its significance.

Answer

Requests for Comments (RFCs)

The standards for TCP/IP are published in a series of documents called Requests for Comments (RFCs). RFCs describe the internal workings of the Internet. TCP/IP standards are always published as RFCs, although not all RFCs specify standards. Some RFCs provide informational, experimental, or historical information only. An RFC begins as an Internet draft, which is typically developed by one or more authors in an IETF working group. An IETF working group is a group of individuals that has a specific charter for an area of technology in the TCP/IP protocol suite.

3. Discuss the characteristics of AAL5.

Answer

Characteristics of AAL5, which is used for TCP/IP:

• Message Mode and Streaming Mode

• Assured Delivery

• Non-Assured Delivery (used by TCP/IP)

• Blocking and Segmentation of Data

• Multipoint Operation

The AAL type is known by the VC endpoints through the cell setup mechanism and is not carried in the ATM cell header. For PVCs, the AAL type is administratively configured at the endpoints when the connection (circuit) is set up. For SVCs, the AAL type is communicated along the VC path through Q.93B as part of call setup establishment and the endpoints use the signaled information for configuration. ATM switches generally do not care about the AAL type of VCs. The AAL5 format specifies a packet format with a maximum size of 64 KB - 1 byte of user data.

The primitives, which the higher layer protocol has to use in order to interface with the AAL layer (at the AAL service access point, or SAP), are rigorously defined. When a high-layer protocol sends data, that data is processed first by the adaptation layer, then by the ATM layer, and then the physical layer takes over to send the data to the ATM network. The cells are transported by the network and then received on the other side first by the physical layer, then processed by the ATM layer, and then by the receiving AAL. When all this is complete, the information (data) is passed to the receiving higher layer protocol. The total function performed by the ATM network has been the non-assured transport (it might have lost some) of information from one side to the other. Looked at from a traditional data processing viewpoint, all the ATM network has done is to replace a physical link connection with another kind of physical connection. All the higher layer network functions must still be performed (for example, IEEE 802.2).
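
As a rough illustration of this framing, the following Python sketch (the function and its parameters are our own, written only for illustration) computes how many 48-byte cell payloads and how many pad bytes a given amount of user data needs, assuming the usual 8-byte AAL5 CPCS trailer:

def aal5_cells_and_padding(user_data_len, trailer_len=8, cell_payload=48):
    # The CPCS-PDU is user data + pad + trailer, padded so that the whole
    # PDU fills an integral number of 48-byte ATM cell payloads.
    unpadded = user_data_len + trailer_len
    cells = -(-unpadded // cell_payload)        # ceiling division
    pad = cells * cell_payload - unpadded
    return cells, pad

# For example, 100 bytes of user data occupy 3 cells with 36 bytes of padding.
print(aal5_cells_and_padding(100))              # -> (3, 36)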

4. What is fragmentation? Explain its significance.

Answer

When an IP datagram travels from one host to another, it can pass through different physical networks. Each physical network has a maximum frame size. This is called the maximum transmission unit (MTU). It limits the length of a datagram that can be placed in one physical frame. IP implements a process to fragment datagrams exceeding the MTU. The process creates a set of datagrams within the maximum size. The receiving host reassembles the original datagram. IP requires that each link support a minimum MTU of 68 octets. This is the sum of the maximum IP header length (60 octets) and the minimum possible length of data in a non-final fragment (8 octets). If any network provides a lower value than this, fragmentation and reassembly must be implemented in the network interface layer. This must be transparent to IP. IP implementations are not required to handle unfragmented datagrams larger than 576 bytes. In practice, most implementations will accommodate larger values. An unfragmented datagram has an all-zero fragmentation information field. That is, the more fragments flag bit is zero and the fragment offset is zero.

The following steps fragment the datagram:

1. The DF flag bit is checked to see if fragmentation is allowed. If the bit is set, the datagram will be discarded and an ICMP error returned to the originator.

2. Based on the MTU value, the data field is split into two or more parts. All newly created data portions must have a length that is a multiple of 8 octets, with the exception of the last data portion.

3. Each data portion is placed in an IP datagram. The headers of these datagrams are minor modifications of the original:
– The more fragments flag bit is set in all fragments except the last.
– The fragment offset field in each is set to the location this data portion occupied in the original datagram, relative to the beginning of the original unfragmented datagram. The offset is measured in 8-octet units.
– If options were included in the original datagram, the high-order bit of the option type byte determines whether this information is copied to all fragment datagrams or only the first. For example, source route options are copied in all fragments.
– The header length field of the new datagram is set.
– The total length field of the new datagram is set.
– The header checksum field is recalculated.

4. Each of these fragments is now forwarded as a normal IP datagram. IP handles each fragment independently. The fragments can traverse different routers to the intended destination. They can be subject to further fragmentation if they pass through networks specifying a smaller MTU. At the destination host, the data is reassembled into the original datagram. The identification field set by the sending host is used together with the source and destination IP addresses in the datagram. Fragmentation does not alter this field. In order to reassemble the fragments, the receiving host allocates a storage buffer when the first fragment arrives. The host also starts a timer. When subsequent fragments of the datagram arrive, the data is copied into the buffer storage at the location indicated by the fragment offset field. When all fragments have arrived, the complete original unfragmented datagram is restored. Processing continues as for unfragmented datagrams. If the timer is exceeded and fragments remain outstanding, the datagram is discarded. The initial value of this timer is called the IP datagram time to live (TTL) value. It is implementation-dependent. Some implementations allow it to be configured. The netstat command can be used on some IP hosts to list the details of fragmentation.
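
The offset and flag arithmetic described in the steps above can be sketched in a few lines of Python; the function below is illustrative only, assumes a fixed 20-byte header with no options, and returns (fragment offset in 8-octet units, data length, more-fragments flag) for each fragment:

def fragment(payload_len, mtu, header_len=20):
    # Data per fragment must be a multiple of 8 octets (except the last part).
    max_data = (mtu - header_len) // 8 * 8
    fragments, offset = [], 0
    while offset < payload_len:
        length = min(max_data, payload_len - offset)
        more = offset + length < payload_len    # more-fragments flag
        fragments.append((offset // 8, length, more))
        offset += length
    return fragments

# A 4000-byte payload over a 1500-byte MTU splits into 1480 + 1480 + 1040 bytes:
print(fragment(4000, 1500))   # [(0, 1480, True), (185, 1480, True), (370, 1040, False)]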

5. Discuss the various steps in domain name resolution.

Answer

Domain Name Resolution The domain name resolution process can be summarized in the following steps:

1. A user program issues a request such as the gethostbyname() system call (this particular call asks for the IP address of a host by passing the host name) or the gethostbyaddr() system call (which asks for the host name of a host by passing the IP address).

2. The resolver formulates a query to the name server. (Full resolvers have a local name cache to consult first; stub resolvers do not.)

3. The name server checks to see if the answer is in its local authoritative database or cache, and if so, returns it to the client. Otherwise, it queries other available name servers, starting down from the root of the DNS tree or as high up the tree as possible.

4. The user program is finally given a corresponding IP address (or host name, depending on the query) or an error if the query could not be answered. Normally, the program will not be given a list of all the name servers that have been consulted to process the query. Domain name resolution is a client/server process. The client function (called the resolver or name resolver) is transparent to the user and is called by an application to resolve symbolic high-level names into real IP addresses or vice versa. The name server (also called a domain name server) is the server application providing the translation between high-level machine names and the IP addresses. The query/reply messages can be transported by either UDP or TCP.
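
The resolver calls mentioned in step 1 can be exercised directly from Python's standard socket module; the sketch below performs a forward lookup and then the corresponding reverse lookup (the host name is only an example):

import socket

# Forward lookup: host name -> IP address (the gethostbyname() direction).
addr = socket.gethostbyname('www.cisco.com')
print(addr)

# Reverse lookup: IP address -> host name (the gethostbyaddr() direction).
try:
    name, aliases, addresses = socket.gethostbyaddr(addr)
    print(name)
except socket.herror:
    # Step 4: the query could not be answered, so the resolver reports an error.
    print('no PTR record for', addr)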

6. Explain the various steps in TCP congestion control.

Answer

TCP Congestion Control Algorithms

One big difference between TCP and UDP is the congestion control algorithm. The TCP congestion algorithm prevents a sender from overrunning the capacity of the network (for example, slower WAN links). TCP can adapt the sender's rate to network capacity and attempt to avoid potential congestion situations. In order to understand the difference between TCP and UDP, understanding basic TCP congestion control algorithms is very helpful. Several congestion control enhancements have been added and suggested to TCP over the years. This is still an active and ongoing research area, but modern implementations of TCP contain four intertwined algorithms as basic Internet standards:

Slow start

Congestion avoidance

Fast retransmit

Fast recovery

Slow Start: Old implementations of TCP start a connection with the sender injecting multiple segments into the network, up to the window size advertised by the receiver. Although this is OK when the two hosts are on the same LAN, if there are routers and slower links between the sender and the receiver, problems can arise. Some intermediate routers cannot handle the burst, packets get dropped, retransmissions result, and performance is degraded. The algorithm to avoid this is called slow start. It operates by observing that the rate at which new packets should be injected into the network is the rate at which the acknowledgments are returned by the other end. Slow start adds another window to the sender's TCP: the congestion window, called cwnd. When a new connection is established with a host on another network, the congestion window is initialized to one segment (for example, the segment size announced by the other end, or the default, typically 536 or 512). Each time an ACK is received, the congestion window is increased by one segment. The sender can transmit up to the lower value of the congestion window or the advertised window. The congestion window is flow control imposed by the sender, while the advertised window is flow control imposed by the receiver. The former is based on the sender's assessment of perceived network congestion; the latter is related to the amount of available buffer space at the receiver for this connection. The sender starts by transmitting one segment and waiting for its ACK. When that ACK is received, the congestion window is incremented from one to two, and two segments can be sent. When each of those two segments is acknowledged, the congestion window is increased to four. This provides exponential growth, although it is not exactly exponential, because the receiver might delay its ACKs, typically sending one ACK for every two segments that it receives. At some point, the capacity of the IP network (for example, slower WAN links) can be reached, and an intermediate router will start discarding packets. This tells the sender that its congestion window has gotten too large. See Figure for an overview of slow start in action.

Congestion Avoidance: The assumption of the algorithm is that packet loss caused by damage is very small (much less than 1%). Therefore, the loss of a packet signals congestion somewhere in the network between the source and destination. There are two indications of packet loss:

A timeout occurs

Duplicate ACKs are received.

Congestion avoidance and slow start are independent algorithms with different objectives. But when congestion occurs, TCP must slow down its transmission rate of packets into the network and invoke slow start to get things going again. In practice, they are implemented together. Congestion avoidance and slow start require that two variables be maintained for each connection:

A congestion window, cwnd

A slow start threshold size, ssthresh

The combined algorithm operates as follows:

1. Initialization for a given connection sets cwnd to one segment and ssthresh to 65535 bytes.

2. The TCP output routine never sends more than the lower value of cwnd or the receiver's advertised window.

3. When congestion occurs (timeout or duplicate ACK), one-half of the current window size is saved in ssthresh. Additionally, if the congestion is indicated by a timeout, cwnd is set to one segment.

4. When new data is acknowledged by the other end, increase cwnd, but the way it increases depends on whether TCP is performing slow start or congestion avoidance. If cwnd is less than or equal to ssthresh, TCP is in slow start; otherwise, TCP is performing congestion avoidance.

Slow start continues until TCP is halfway to where it was when congestion occurred (since it recorded half of the window size that caused the problem in step 3), and then congestion avoidance takes over. Slow start has cwnd begin at one segment, incremented by one segment every time an ACK is received. As mentioned earlier, this opens the window exponentially: send one segment, then two, then four, and so on. Congestion avoidance dictates that cwnd be incremented by segsize*segsize/cwnd each time an ACK is received, where segsize is the segment size and cwnd is maintained in bytes. This is a linear growth of cwnd, compared to slow start's exponential growth. The increase in cwnd should be at most one segment each round-trip time (regardless of how many ACKs are received in that round-trip time), while slow start increments cwnd by the number of ACKs received in a round-trip time. Many implementations incorrectly add a small fraction of the segment size (typically the segment size divided by 8) during congestion avoidance. This is wrong and should not be emulated in future releases. See Figure for an example of TCP slow start and congestion avoidance in action.
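
The growth rules in step 4 can be expressed in a few lines of Python; this is only a sketch of the arithmetic (the segment size and the loop are illustrative), not a full TCP implementation:

def cwnd_after_ack(cwnd, ssthresh, segsize):
    # Step 4 above: slow start while cwnd <= ssthresh, congestion avoidance after.
    if cwnd <= ssthresh:
        return cwnd + segsize                    # exponential growth per round trip
    return cwnd + segsize * segsize / cwnd       # roughly one segment per round trip

cwnd, ssthresh, segsize = 536.0, 4 * 536.0, 536
for ack in range(12):
    cwnd = cwnd_after_ack(cwnd, ssthresh, segsize)
    print(round(cwnd))    # doubles quickly up to ssthresh, then grows linearly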

Fast Retransmit: Fast retransmit avoids having TCP wait for a timeout to resend lost segments. Modifications to the congestion avoidance algorithm were proposed in 1990. Before describing the change, realize that TCP can generate an immediate acknowledgment (a duplicate ACK) when an out-of-order segment is received. This duplicate ACK should not be delayed. The purpose of this duplicate ACK is to let the other end know that a segment was received out of order and to tell it what sequence number is expected. Because TCP does not know whether a duplicate ACK is caused by a lost segment or just a reordering of segments, it waits for a small number of duplicate ACKs to be received. It is assumed that if there is just a reordering of the segments, there will be only one or two duplicate ACKs before the reordered segment is processed, which will then generate a new ACK. If three or more duplicate ACKs are received in a row, it is a strong indication that a segment has been lost. TCP then performs a retransmission of what appears to be the missing segment, without waiting for a retransmission timer to expire. See Figure for an overview of TCP fast retransmit in action.

Fast recovery: After fast retransmit sends what appears to be the missing segment, congestion avoidance, but not slow start, is performed. This is the fast recovery algorithm. It is an improvement that allows high throughput under moderate congestion, especially for large windows. The reason for not performing slow start in this case is that the receipt of the duplicate ACKs tells TCP more than just a packet has been lost. Because the receiver can only generate the duplicate ACK when another segment is received, that segment has left the network and is in the receiver's buffer. That is, there is still data flowing between the two ends, and TCP does not want to reduce the flow abruptly by going into slow start.

The fast retransmit and fast recovery algorithms are usually implemented together as follows:

1. When the third duplicate ACK in a row is received, set ssthresh to one-half the current congestion window, cwnd, but no less than two segments. Retransmit the missing segment. Set cwnd to ssthresh plus three times the segment size. This inflates the congestion window by the number of segments that have left the network and the other end has cached (3).

2. Each time another duplicate ACK arrives, increment cwnd by the segment size. This inflates the congestion window for the additional segment that has left the network. Transmit a packet, if allowed by the new value of cwnd.

3. When the next ACK arrives that acknowledges new data, set cwnd to ssthresh (the value set in step 1). This ACK is the acknowledgment of the retransmission from step 1, one round-trip time after the retransmission. Additionally, this ACK acknowledges all the intermediate segments sent between the lost packet and the receipt of the first duplicate ACK. This step is congestion avoidance, because TCP is down to one-half the rate it was at when the packet was lost.
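
A minimal sketch of the window arithmetic in steps 1 and 3 follows, with purely illustrative numbers (this is not a full implementation):

def on_third_duplicate_ack(cwnd, segsize):
    # Step 1: save half the current window in ssthresh (at least two segments),
    # then inflate cwnd by the three segments known to have left the network.
    ssthresh = max(cwnd // 2, 2 * segsize)
    return ssthresh + 3 * segsize, ssthresh

def on_new_data_ack(ssthresh):
    # Step 3: deflate cwnd back to ssthresh and continue in congestion avoidance.
    return ssthresh

cwnd, ssthresh = on_third_duplicate_ack(cwnd=8 * 536, segsize=536)
print(cwnd, ssthresh)         # 3752 2144 with these example numbers
cwnd = on_new_data_ack(ssthresh)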

7. Explain the steps involved in DHCP client/server interaction.

Answer

The following procedure describes the DHCP client/server interaction steps

1. The client broadcasts a DHCPDISCOVER message on its local physical subnet. At this point, the client is in the INIT state. The DHCPDISCOVER message might include some options such as network address suggestion or lease duration.

2. Each server responds with a DHCPOFFER message that includes an available network address (in the 'your IP address' field) and other configuration options. The servers record the address as offered to the client to prevent the same address being offered to other clients in the event of further DHCPDISCOVER messages being received before the first client has completed its configuration.

3. The client receives one or more DHCPOFFER messages from one or more servers. The client chooses one based on the configuration parameters offered and broadcasts a DHCPREQUEST message that includes the server identifier option to indicate which message it has selected and the requested IP address option taken from the 'your IP address' field in the selected offer.

4. In the event that no offers are received, if the client has knowledge of a previous network address, the client can reuse that address if its lease is still valid until the lease expires.

5. The servers receive the DHCPREQUEST broadcast from the client. Those servers not selected by the DHCPREQUEST message use the message as notification that the client has declined that server's offer. The server selected in the DHCPREQUEST message commits the binding for the client to persistent storage and responds with a DHCPACK message containing the configuration parameters for the requesting client. The combination of client hardware address and assigned network address constitutes a unique identifier for the client's lease and is used by both the client and server to identify a lease referred to in any DHCP messages. The 'your IP address' field in the DHCPACK message is filled in with the selected network address.

6. The client receives the DHCPACK message with configuration parameters. The client performs a final check on the parameters (for example, using ARP to probe the allocated network address) and notes the duration of the lease and the lease identification cookie specified in the DHCPACK message. At this point, the client is configured.

7. If the client detects a problem with the parameters in the DHCPACK message (the address is already in use in the network, for example), the client sends a DHCPDECLINE message to the server and restarts the configuration process. The client should wait a minimum of ten seconds before restarting the configuration process to avoid excessive network traffic in case of looping. On receipt of a DHCPDECLINE, the server must mark the offered address as unavailable (and possibly inform the system administrator that there is a configuration problem).

8. If the client receives a DHCPNAK message, the client restarts the configuration process.

9. The client may choose to relinquish its lease on a network address by sending a DHCPRELEASE message to the server. The client identifies the lease to be released by including its network address and its hardware address.
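
The message sequence above (the usual DISCOVER/OFFER/REQUEST/ACK exchange) can be summarized as a small state-transition sketch in Python; the message-type numbers follow RFC 2132, and the helper function is purely illustrative:

from enum import Enum

class DhcpMessage(Enum):
    # DHCP message type option values from RFC 2132.
    DISCOVER = 1
    OFFER = 2
    REQUEST = 3
    DECLINE = 4
    ACK = 5
    NAK = 6
    RELEASE = 7

def client_next_message(received):
    # What the client sends next, given the last server message (None = start).
    if received is None:
        return DhcpMessage.DISCOVER        # step 1: broadcast DHCPDISCOVER
    if received is DhcpMessage.OFFER:
        return DhcpMessage.REQUEST         # step 3: select one of the offers
    if received is DhcpMessage.NAK:
        return DhcpMessage.DISCOVER        # step 8: restart the configuration
    return None                            # step 6: DHCPACK received, configured

# Typical sequence: DISCOVER -> OFFER -> REQUEST -> ACK.
print(client_next_message(None), client_next_message(DhcpMessage.OFFER))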

ASSIGNMENTS-02

1. What do you mean by “OPTION NEGOTIATION”? Explain with an example.

Answer

Using internal commands, Telnet is able to negotiate options in each host. The starting base of negotiation is the NVT capability: each host to be connected must agree to this minimum. Every option can be negotiated by the use of the four command codes WILL, WONT, DO, and DONT. In addition, some options have suboptions.

Telnet Basic Commands: The primary goal of the Telnet protocol is the provision of a standard interface for hosts over a network. To allow the connection to start, the Telnet protocol defines a standard representation for some functions:

IP     Interrupt Process
AO     Abort Output
AYT    Are You There
EC     Erase Character
EL     Erase Line
SYNCH  Synchronize
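
As an example of option negotiation, the Python sketch below opens a raw TCP connection to a Telnet server and refuses every option the server proposes, which keeps the session at the minimum NVT capability. The command codes are the standard Telnet values; the host name is illustrative:

import socket

IAC, WILL, WONT, DO, DONT = 255, 251, 252, 253, 254   # Telnet command codes

with socket.create_connection(('telnet.example.com', 23), timeout=5) as s:   # illustrative host
    data = s.recv(1024)
    reply = bytearray()
    i = 0
    while i < len(data):
        if data[i] == IAC and i + 2 < len(data):
            cmd, opt = data[i + 1], data[i + 2]
            if cmd == DO:                      # "please enable option opt" -> refuse with WONT
                reply += bytes([IAC, WONT, opt])
            elif cmd == WILL:                  # "I would like to enable opt" -> refuse with DONT
                reply += bytes([IAC, DONT, opt])
            i += 3
        else:
            i += 1
    s.sendall(bytes(reply))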

2. Explain the principle of operation of REXEC protocol.

Answer

Remote Execution Command Daemon (REXECD) is a server that allows the execution of jobs submitted from a remote host over the TCP/IP network. The client uses the REXEC or Remote Shell Protocol (RSH) command to transfer the job across to the server. Any standard output or error output is sent back to the client for display or further processing.

Principle of Operation: REXECD is a server (or daemon). It handles commands issued by foreign hosts and transfers orders to subordinate virtual machines for job execution. The daemon performs automatic login and user authentication when a user ID and password are entered. The REXEC command is used to define the user ID, password, host address, and the process to be started on the remote host. However, RSH does not require you to send a user name and password; it uses a host access file instead. Both server and client are linked over the TCP/IP network. REXEC uses TCP port 512 and RSH uses TCP port 514. See Figure for more details.
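
A minimal sketch of the REXEC exchange on TCP port 512 is shown below; the host name, credentials, and command are hypothetical, error handling is omitted, and the byte sequence (stderr port, user ID, password, and command, each NUL-terminated, answered by a single zero byte on success) is our summary of conventional rexecd behaviour rather than text from the study material:

import socket

with socket.create_connection(('host.example.com', 512), timeout=10) as s:   # hypothetical host
    s.sendall(b'0\0')          # no separate connection requested for standard error
    s.sendall(b'alice\0')      # user ID, NUL-terminated
    s.sendall(b'secret\0')     # password, NUL-terminated
    s.sendall(b'ls -l\0')      # command to execute on the remote host
    if s.recv(1) == b'\0':     # a single zero byte signals a successful login
        print(s.recv(4096).decode(errors='replace'))   # command output comes back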

3. List and explain the generic SNMP traps.

Answer

The fundamental use of the Simple Network Management Protocol (SNMP) is to manage all aspects of a network, as well as applications related to that network. To do this, SNMP has been architected to perform two services:

Monitor: SNMP implementations allow network administrators to monitor their networks in order to, among other things, ensure the health of the network, forecast usage and capacity, and assist in problem determination. Aspects that can be monitored vary in granularity, and can be something as global as the total amount of IP traffic experienced on a single host, or as minute as the current status of a single TCP connection. Additionally, the SNMP architecture allows components of the SNMP model to notify network administrators should a problem occur on a network. For example, if a link were to break or an interface were to deactivate for some reason, SNMP can send a message to alert network administrators that this has occurred.

Manage: In addition to monitoring a network, SNMP provides the capability for network administrators to change aspects of the network. Values that regulate network operation can be altered, allowing administrators to quickly respond to network problems, dynamically implement new network changes, and perform real-time testing of how changes may affect their network. SNMP implements a manager/agent/subagent model, which conforms very closely to the client/server model. RFC 1157 defines the components and interactions involved in an SNMP community, which include:

A Management Information Base

An SNMP agent

A manager

SNMP subagents

Traps are asynchronous notifications of events occurring within an SNMP community. They can be generated both by SNMP agents and SNMP subagents. Additionally, they can be RFC architected (called a generic trap type) or they can be proprietary (called enterprise-specific). Architected traps, defined in RFC 1215, are as follows:

ColdStartEvent: Notifies managers that the SNMP agent is reinitializing and that the configuration might have been altered. This trap belongs to the RFC 1213-architected System group, and is therefore supported by the SNMP agent.

WarmStartEvent: Notifies managers that the SNMP agent is reinitializing, but there has been no alteration of the configuration. This trap belongs to the RFC 1213-architected System group, and is therefore supported by the SNMP agent.

LinkDownEvent: Notifies managers that an interface has been deactivated. Information identifying the interface is included in the trap. This trap belongs to the RFC 1213-architected Interface group and is usually supported by a TCP/IP specific subagent.

LinkupEvent: Notifies managers that an interface has been activated. Information identifying the interface is included in the trap. This trap belongs to the RFC 1213-architected Interface group and is usually supported by a TCP/IP-specific subagent, or by an SNMP agent that manages its own TCP/IP MIBs.
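
For reference, the generic-trap field of an SNMPv1 trap PDU carries a small numeric code; the mapping below (a plain Python dictionary, written here only as a convenience) relates those RFC 1157/1215 codes to the trap names discussed above:

# Generic-trap codes carried in an SNMPv1 trap PDU (RFC 1157 / RFC 1215).
GENERIC_TRAPS = {
    0: 'coldStart',
    1: 'warmStart',
    2: 'linkDown',
    3: 'linkUp',
    4: 'authenticationFailure',
    5: 'egpNeighborLoss',
    6: 'enterpriseSpecific',   # proprietary traps also carry a specific-trap code
}

def describe_trap(generic, specific=0):
    name = GENERIC_TRAPS.get(generic, 'unknown')
    return f'{name}({specific})' if generic == 6 else name

print(describe_trap(2))        # linkDown
print(describe_trap(6, 42))    # enterpriseSpecific(42)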

4. Discuss FTP proxy transfer through firewall.

Answer

FTP provides the ability for a client to have data transferred from one FTP server to another FTP server. Several justifications for such a transfer exist, including:

To transfer data from one host to another when direct access to the two hosts is not possible.

To bypass a slow client connection.

To bypass a firewall restriction.

To reduce the amount of traffic within the client’s network.

The process of setting up a proxy transfer begins with the use of a proxy open command. Any FTP command can then be sent to the proxy server by preceding the command with proxy. For example, executing the dir command lists the files on the primary FTP server. Executing the proxy dir command lists the files on the proxy server. The proxy get and proxy put commands can then be used to transfer data between the two hosts.

1. The FTP client opens a connection and logs on to the FTP server A.

2. The FTP client issues a proxy open command, and a new control connection is established with FTP server B.

3. The FTP client then issues a proxy get command (though this can also be a proxy put).

4. A data connection is established between server A and server B. Following data connection establishment, the data flows from server B to server A.
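
The sequence above can be sketched with Python's standard ftplib by driving the raw FTP commands by hand; this is only an outline (the host names, credentials, and file name are hypothetical, and the final 226 completion replies are not collected here):

from ftplib import FTP
import re

a = FTP('ftp-a.example.com'); a.login('user', 'secret')   # step 1: control connection to server A
b = FTP('ftp-b.example.com'); b.login('user', 'secret')   # step 2: control connection to server B

# Put server B into passive mode and hand its data address to server A, so the
# data connection in step 4 is opened directly between the two servers.
pasv = b.sendcmd('PASV')       # e.g. "227 Entering Passive Mode (h1,h2,h3,h4,p1,p2)"
a.sendcmd('PORT ' + re.search(r'\(([\d,]+)\)', pasv).group(1))

# Steps 3 and 4: A stores what B retrieves, so the data flows from server B to
# server A without passing through the client.
b.sendcmd('RETR report.dat')   # both replies here are 150 preliminary responses
a.sendcmd('STOR report.dat')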

5. Differentiate between getNextRequest and getBulkRequest taking an appropriate example.

Answer

The GetBulkRequest is defined in RFC 3416 and is thus part of the protocol operations. A GetBulkRequest is generated and transmitted as a request of an SNMPv2 application. The purpose of the GetBulkRequest is to request the transfer of a potentially large amount of data, including, but not limited to, the efficient and rapid retrieval of large tables. The GetBulkRequest is more efficient than the GetNextRequest in the case of retrieval of large MIB object tables. The syntax of the GetBulkRequest is:

GetBulkRequest [non-repeaters = N, max-repetitions = M] (RequestedObjectName1, RequestedObjectName2, RequestedObjectName3)

Where:

RequestedObjectName1, 2, 3: MIB object identifiers, such as sysUpTime. The objects are in a lexicographically ordered list. Each object identifier has a binding to at least one variable. For example, the object identifier ipNetToMediaPhysAddress has a variable binding for each IP address in the ARP table and the content is the associated MAC address.

N: Specifies the non-repeaters value, which means that you request only the contents of the variable next to the object specified in your request for the first N objects named between the parentheses. This is the same function as provided by the GetNextRequest.

M: Specifies the max-repetitions value, which means that you request, from the remaining (number of requested objects - N) objects, the contents of the M variables next to each object specified in the request. This is similar to an iterated GetNextRequest, but transmitted in only one request. With the GetBulkRequest, you can efficiently get the contents of the next variable or the next M variables in only one request.

Assume the following ARP table in a host that runs an SNMPv2 agent:

Interface-Number   Network-Address   Physical-Address    Type
1                  10.0.0.51         00:00:10:01:23:45   static
1                  9.2.3.4           00:00:10:54:32:10   dynamic
2                  10.0.0.15         00:00:10:98:76:54   dynamic
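
Assuming the third-party pysnmp library and an SNMPv2c agent reachable at the illustrative address 192.0.2.1, the sketch below walks that kind of table both ways; nextCmd issues one GetNextRequest per row, while bulkCmd asks for up to ten rows in a single GetBulkRequest:

from pysnmp.hlapi import (
    SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
    ObjectType, ObjectIdentity, nextCmd, bulkCmd,
)

target = (SnmpEngine(), CommunityData('public', mpModel=1),
          UdpTransportTarget(('192.0.2.1', 161)), ContextData())
column = ObjectType(ObjectIdentity('IP-MIB', 'ipNetToMediaPhysAddress'))

# GetNextRequest: one row of the ARP table per request/response round trip.
for err, status, index, var_binds in nextCmd(*target, column, lexicographicMode=False):
    print('next:', var_binds)

# GetBulkRequest: non-repeaters = 0, max-repetitions = 10, so up to ten rows
# come back in a single response.
for err, status, index, var_binds in bulkCmd(*target, 0, 10, column, lexicographicMode=False):
    print('bulk:', var_binds)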

6. What is content negotiation? Discuss its types.

Answer

Content negotiation: In order to find the best handling for different types of data, the correct representation for a particular entity body should be negotiated. There are three types of negotiation:

o Server-driven negotiation: The representation for a response is determined according to the algorithms located at the server.

o Agent-driven negotiation: The representation for a response is determined according to the algorithms located at the user agent (client).

o Transparent negotiation: This is a combination of both server-driven and agent-driven negotiation. It is accomplished by a cache that includes a list of all available representations.
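
In server-driven negotiation the client states its preferences in Accept headers and the server picks a representation; the Python sketch below (standard library only, host name illustrative) sends such headers and prints what the server chose:

import http.client

conn = http.client.HTTPSConnection('www.example.com')   # illustrative host
conn.request('GET', '/', headers={
    # Quality values (q=) rank the representations the client can handle.
    'Accept': 'text/html, application/xhtml+xml;q=0.9, */*;q=0.1',
    'Accept-Language': 'en;q=1.0, de;q=0.5',
})
resp = conn.getresponse()
# The server reports its choice in Content-Type; Vary lists the request
# headers that influenced the decision.
print(resp.status, resp.getheader('Content-Type'), resp.getheader('Vary'))
conn.close()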

7. List and explain various fields of HTTP messages.

Answer

HTTP message: HTTP messages consist of the following fields:

Message types: An HTTP message can be either a client request or a server response. The following string indicates the HTTP message type: HTTP-message = Request | Response

Message header: The HTTP message header field can be one of the following:

General header

Request header

Response header

Entity header

Message body: Message body can be referred to as entity body if no transfer coding has been applied. Message body simply carries the entity body of the relevant request or response.

Message length: Message length indicates the length of the message body if it is included.

General header field: General header fields can apply to both request and response messages. Currently defined general header field options are as follows:

o Cache-Control

o Connection

o Date

o Pragma

o Transfer-Encoding

o Upgrade

o Via

Request: A request message from a client to a server includes the method to be applied to the resource, the identifier of the resource, and the protocol version in use. A request message field is as follows: Request = Request-Line *( general-header | request-header | entity-header ) CRLF [ message-body ]

Response: An HTTP server returns a response after evaluating the client request. A response message field is as follows: Response = Status-Line *( general-header | response-header | entity-header ) CRLF [ message-body ]

Entity: Either the client or the server might send an entity in the request message or the response message, unless otherwise indicated. An entity consists of the following:

Entity header fields

Entity body
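
A small Python sketch that assembles a request by hand makes the fields above visible: the Request-Line, the different kinds of headers, the empty CRLF line, and the message body. The method, path, host, and body are illustrative only:

body = b'name=example'                                   # illustrative entity body
headers = [
    b'POST /form HTTP/1.1',                              # Request-Line
    b'Host: www.example.com',                            # request header
    b'Connection: close',                                # general header
    b'Content-Type: application/x-www-form-urlencoded',  # entity header
    b'Content-Length: ' + str(len(body)).encode(),       # entity header
]
# An empty line (CRLF) separates the headers from the message body.
request = b'\r\n'.join(headers) + b'\r\n\r\n' + body
print(request.decode())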