Data Communications II Autumn 2002
Problem set 1. (17.-19.9.2002)
The TCP layer gets data to send from an application. Suppose
the amount of data is 15 Kbytes and the network allows
segments of 1500 data bytes. Explain which segments are
transmitted between the sender and the receiver, and in which
order, in order to carry out the data transmission.
Explain also the content of the relevant fields in these segments.
You can assume that the data transmission is constrained neither
by the receiver window nor by the congestion window.
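As a starting point for the answer, the segment boundaries and sequence numbers can be sketched as below (assumed values: 15 Kbytes taken as 15 000 bytes, and an initial sequence number of 0 after the three-way handshake; real stacks pick a random initial sequence number):

```python
# Sketch: split an application buffer into TCP segments.
# Assumptions: 15 Kbytes = 15000 bytes of data, 1500-byte segments,
# initial sequence number 0 (real TCPs use a random ISN).
DATA_BYTES = 15_000
MSS = 1_500

def segments(data_bytes, mss, isn=0):
    """Return (sequence number, payload length) for each data segment."""
    segs = []
    offset = 0
    while offset < data_bytes:
        length = min(mss, data_bytes - offset)
        segs.append((isn + offset, length))
        offset += length
    return segs

segs = segments(DATA_BYTES, MSS)
print(len(segs))          # 10 segments
print(segs[0], segs[-1])  # (0, 1500) ... (13500, 1500)
```

So ten full-sized data segments are needed, in addition to the connection setup (SYN, SYN+ACK, ACK), the acknowledgements, and the connection release.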
- A TCP connection has a line
rate of 100 Mbps and a round-trip time (RTT) of 100 ms.
- What is the maximum transmission
rate of the connection without using the options of the TCP
protocol?
- What window scale option gives
the full line speed (transmission rate) for this connection?
How is the window scale option used?
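The arithmetic behind both sub-questions can be checked with a short sketch. The 65 535-byte cap comes from the 16-bit window field in the TCP header, and the scale option multiplies the advertised window by 2^s (RFC 1323):

```python
# Bandwidth-delay product vs. the 16-bit TCP window field.
LINE_RATE = 100e6        # line rate, 100 Mbps
RTT = 0.1                # round-trip time, 100 ms
MAX_WINDOW = 65_535      # largest value of the 16-bit window field, bytes

# Without options, one window per RTT caps the throughput.
max_rate_bps = MAX_WINDOW * 8 / RTT
print(round(max_rate_bps / 1e6, 2))   # ~5.24 Mbps

# Bandwidth-delay product: bytes that must be in flight for full speed.
bdp_bytes = LINE_RATE * RTT / 8
print(int(bdp_bytes))                 # 1250000 bytes

# Smallest window scale s with 65535 * 2**s >= BDP.
scale = 0
while MAX_WINDOW * 2**scale < bdp_bytes:
    scale += 1
print(scale)                          # 5
```

The scale value itself is negotiated in the SYN segments: each end sends its shift count in the window scale option, and afterwards every advertised window is interpreted as the 16-bit value shifted left by that count.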
- Show with a diagram how 'basic'
TCP (TCP with slow start, retransmission timer, fast retransmit and
fast recovery) handles the following situations. MSS is the default
MSS = 536 bytes (RFC 879) and the transmission rate is 100 kbps.
- At the very beginning of the
transmission the very first packet arrives corrupted at the
receiver. The size of the congestion window is 2 MSS.
- The size of the congestion window
is 8 MSS. At first the sender succeeds in sending one segment
correctly, but after that an error burst corrupts the following 3
packets.
- The size of the congestion window
is 8 MSS. The second segment is routed to a congested router and
arrives at the receiver only after the fourth segment sent. The
other segments arrive in order.
In all these cases the round-trip time (RTT) is 200 ms and the
retransmission timer is set to 3 * RTT. The transmission is
constrained only by the congestion window.
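As a checklist for drawing the diagrams, the window updates on the two loss signals can be sketched as below (a Reno-style simplification, assuming the amount of outstanding data equals cwnd; values are in units of MSS):

```python
# How cwnd and ssthresh change on the two loss signals in 'basic' TCP
# (Reno-style sketch; all values in units of MSS, flight size ~ cwnd).
def on_timeout(cwnd, ssthresh):
    # Retransmission timeout: ssthresh = max(cwnd/2, 2),
    # then restart from slow start with cwnd = 1.
    return 1, max(cwnd // 2, 2)

def on_fast_retransmit(cwnd, ssthresh):
    # Three duplicate ACKs: halve cwnd and enter fast recovery.
    new_ssthresh = max(cwnd // 2, 2)
    return new_ssthresh, new_ssthresh

# Case 1: cwnd = 2 MSS, very first packet lost. Too few duplicate
# ACKs can arrive for fast retransmit, so only the timer can fire.
print(on_timeout(2, 64))          # (1, 2)

# Cases 2 and 3 start from cwnd = 8 MSS.
print(on_fast_retransmit(8, 64))  # (4, 4)
```

The diagrams then only need the timing: with RTT = 200 ms the timer expires 600 ms after the lost segment was sent, whereas fast retransmit fires as soon as the third duplicate ACK arrives.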
Show with a diagram how the different cases of the previous
problem are handled by a TCP using limited transmit. In what
situations does limited transmit seem to be useful?
Find out what is meant by "exponential retransmission
timer backoff". How does it work? Why is it needed? What
benefits does it provide?
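The mechanism itself is simple to sketch: after every retransmission timeout the retransmission timer value (RTO) is doubled, usually up to some cap, so that a sender backs off instead of repeatedly hammering a congested or unreachable path (the cap of 64 s below is an assumption; implementations vary):

```python
# Sketch of exponential retransmission timer backoff: the RTO doubles
# after each consecutive timeout, up to an assumed cap (here 64 s).
def backoff_series(initial_rto, retries, cap=64.0):
    rto, series = initial_rto, []
    for _ in range(retries):
        series.append(rto)
        rto = min(rto * 2, cap)
    return series

# With this problem set's RTO = 3 * RTT = 3 * 0.2 s = 0.6 s:
print(backoff_series(0.6, 6))   # [0.6, 1.2, 2.4, 4.8, 9.6, 19.2]
```

The successful arrival of a new acknowledgement resets the backoff, and the RTO returns to the value estimated from measured round-trip times.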
- The sending TCP follows the Nagle
algorithm.
- An application sends data byte by
byte. How is it possible to force TCP to send the data, even
one byte at a time, immediately after it arrives? Are there
situations where this kind of fast but inefficient data transfer
is needed and necessary?
Could the Nagle algorithm be useful in avoiding the silly
window syndrome?