CS 176B -- Network Computing
Homework Assignment #2
Due by 11:59pm on Wednesday, February 6, 2002
While TCP is a very good protocol, it is not useful for all applications.
For example, TCP is not good at all for real-time streaming audio (or video).
Consider the scenario where a user has a 56 kbps connection and wants
to receive an audio file. Either that file can be delivered in its entirety
before the user starts listening or it can be streamed at whatever rate
is necessary to sustain playback. The first option has the drawback that
it has a long initial delay (you cannot start playing the file until all or
most of it has been downloaded). The second option has the drawback that if
the path between the server and the client cannot sustain the streaming
bandwidth, the client might lose a significant portion of the streamed data.
As greater bandwidth becomes available in the Internet, the use of
the streaming option is becoming more viable. When audio (or video) is
streamed, UDP is used instead of TCP. The reason is that TCP does
re-transmission of data. It is not the re-transmission that is bad, but
the fact that when TCP discovers a loss, it stops and handles the loss
before continuing with the initial transmission.
This does not work at all for streaming. Instead of stopping, the loss
should just be ignored. A little bit of static is better than stopping
the stream completely. Therefore, UDP is often used.
The problem with UDP though is that it does not have congestion control.
While we have not yet studied the need for congestion control or techniques
to handle it, understanding the problem is straightforward enough. The
problem is that if a server sends data too quickly it will cause congestion
in the network. A result of congestion is lost data. This creates a
delicate situation for streaming UDP: send too slowly and not enough data
gets to the client; send too quickly and a lot of data might be lost.
The goal of this assignment is to understand the challenges of using
UDP to stream data from a server to a client. A really hard assignment
would be to implement both full TCP-style congestion control and 100%
reliability on top of UDP. Since we only have two weeks, this assignment
will just give you a taste of some of these functions.
In this assignment you will write a client and server. The function of
the client and server is essentially to have the server stream a set
of packets to the client and then have the client figure out if any of
the packets were lost. Initially the client will send a request to
the server requesting a specified bit rate. The server will send at
this bit rate and the client will figure out if it is too slow or too
fast based on the amount of loss. The client will then send a new bit
rate to the server and will repeat the process for a specified number
of rounds.
Now, breaking this functionality into a deeper level of detail, the
operation starts with the server running and waiting for a request
from the client. When the client starts, it will first
send a request to the server for a stream of packets at a given
rate. In this request, the client will specify a number of parameters
including the number
of packets to send, the transmission time between packets, and the
packet size. The values of these parameters can be used to calculate
the bandwidth of the stream sent by the server. Upon receiving this
request, the server will send a stream of packets (each packet will
have an application layer sequence number) to the client.
The client will observe the number of packets lost and compute the
actual bandwidth used.
The above set of
actions constitutes a ``round''. The client will run a number
of rounds. At the start of each round the client will send a new
request to the server and will either increase or decrease the rate
at which the server sends packets. The change in rate will depend
on whether any packets were lost from the server.
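For example, the requested bandwidth and the per-round rate change described above can be computed as follows (a sketch in C; the function names are illustrative, not required by the assignment):

```c
/* Requested bandwidth in bits per second: one packet of pkt_size
 * bytes is sent every xmit_time milliseconds. */
double requested_bps(int pkt_size, int xmit_time)
{
    return (pkt_size * 8.0 * 1000.0) / xmit_time;
}

/* After a round, shrink the inter-packet time (speed up) if nothing
 * was lost, or grow it (slow down) if packets were lost. */
int adjust_xmit_time(int xmit_time, int increment, int packets_lost)
{
    if (packets_lost == 0)
        return xmit_time - increment;   /* no loss: try a higher rate */
    else
        return xmit_time + increment;   /* loss: back off */
}
```

With pkt_size 1000 and xmit_time 1000 ms, requested_bps gives 8000 bps, which matches the numbers in the Operational Examples section.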
The things that are important for this assignment include the following:
1. Server command line parameters.
2. Client command line parameters.
3. Format of the message from the client to the server.
4. Format of the data packets sent from the server to the client.
5. Operation and output of the client.
6. Ending the test and closing the programs.
1. Server command line parameters
This is relatively straightforward. It is just the port number
that the server is going to listen on. See the Operational
Examples section for sample input.
2. Client command line parameters
All of the parameters that the client needs are given as
command line parameters. There is no input needed once
the program starts running. This makes testing easier.
The set of parameters includes:
<server_name>: The server's host name or IP address
<server_port>: The server's port number
<num_rounds>: The number of rounds to run
<num_packets>: The number of packets to be sent per round
<pkt_size>: The size of each packet the server should send (in bytes)
<xmit_time>: The inter-packet transmission time (in milliseconds), i.e.
the sleep time between the transmission of two packets
<increment>: The size of increment to adjust the inter-packet transmission time after each round (in milliseconds)
See the Operational Examples section for sample client input.
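A sketch of how the client might collect these parameters in C (the struct and function names are my own, not mandated by the assignment):

```c
#include <stdlib.h>

struct client_params {
    const char *server_name;
    int server_port;
    int num_rounds;
    int num_packets;
    int pkt_size;
    int xmit_time;   /* milliseconds */
    int increment;   /* milliseconds */
};

/* Fill in params from argv; returns 0 on success, -1 on bad usage. */
int parse_client_args(int argc, char *argv[], struct client_params *p)
{
    if (argc != 8)
        return -1;
    p->server_name = argv[1];
    p->server_port = atoi(argv[2]);
    p->num_rounds  = atoi(argv[3]);
    p->num_packets = atoi(argv[4]);
    p->pkt_size    = atoi(argv[5]);
    p->xmit_time   = atoi(argv[6]);
    p->increment   = atoi(argv[7]);
    return 0;
}
```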
3. Format of the message from the client to the server
The message should be sent as four integers, and the values
represented by these fields should be:
<num_rounds> <num_packets> <pkt_size> <xmit_time>
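Whether the four integers travel as binary fields or as a text line is up to your implementation; one way (shown here as an assumption, not a requirement) is to place them in network byte order into a 16-byte buffer:

```c
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>   /* htonl / ntohl */

/* Pack the four request fields into buf (at least 16 bytes),
 * each as a 4-byte integer in network byte order. */
void pack_request(unsigned char *buf, int32_t num_rounds,
                  int32_t num_packets, int32_t pkt_size, int32_t xmit_time)
{
    uint32_t f[4];
    f[0] = htonl((uint32_t)num_rounds);
    f[1] = htonl((uint32_t)num_packets);
    f[2] = htonl((uint32_t)pkt_size);
    f[3] = htonl((uint32_t)xmit_time);
    memcpy(buf, f, sizeof f);
}

/* The server reverses the process with ntohl. */
int32_t unpack_field(const unsigned char *buf, int index)
{
    uint32_t v;
    memcpy(&v, buf + 4 * index, 4);
    return (int32_t)ntohl(v);
}
```

Using network byte order means the client and server agree on the layout even if they run on machines with different endianness.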
4. Format of the data packets sent from the server to the client
The data sent between the server and client is pretty much irrelevant.
Instead of adding additional complexity by having the server
open a file and read data, you can simply create random data
to send to the client. The client will not care what is sent.
However, while the data itself is not important, what is important
is whether the packet carrying the data is received or not. In
order for the client to make this distinction, the server will send
data packets with sequence numbers. The sequence number will be
an integer prepended to each packet. Therefore, the size of the
packet actually sent will be the pkt_size plus 4 bytes. The
first sequence number will always be zero (0) and will increment up to
INT_MAX and will then wrap to zero again. Your program should be able
to handle this sequence number wrap.
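Counting losses across the wrap takes a little care; one sketch (with illustrative function names) is:

```c
#include <limits.h>

/* Next expected sequence number, wrapping from INT_MAX back to 0. */
int next_seq(int seq)
{
    return (seq == INT_MAX) ? 0 : seq + 1;
}

/* Number of packets skipped between the sequence number we expected
 * and the one actually received (0 when they match), accounting for
 * a wrap past INT_MAX. */
long gap(int expected, int received)
{
    if (received >= expected)
        return (long)received - expected;
    /* received wrapped around while expected did not */
    return ((long)INT_MAX - expected + 1) + received;
}
```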
5. Operation and output of the client
Client operation is relatively straightforward. After starting
and parsing its command line parameters, the client should
send a request to the server. Then it should start waiting to
receive data packets.
From here, the operation turns into a bunch of conditionals.
First, if the client receives all num_packets packets then everything
works very simply. Second, if any packets other than the last are lost, the
client can use the sequence numbers to immediately tell which and
how many packets were lost. Now it becomes a little trickier...
If the last packet is lost the client will not know whether
to continue waiting or assume the packet is lost. The problem
is actually worse than this. What if the last two packets
are lost, or even the last three! Therefore, you will
need to use a timer after each packet is received. If no additional
packets are received within two seconds, you should assume
all the remaining packets have been lost and continue to the
end-of-round processing step.
Be aware that after a request is sent, it is entirely
possible that NO packets are received. After the request
is sent, you should wait 10 seconds to receive the first
packet. If no packet is received within the first 10
seconds, you should assume 100% loss. (While this scenario
is pretty unlikely (at least the first packet should make it),
your program should be able to handle this possibility.)
Once a round ends, your program should print the following:
End of Round AA
Packets Expected: BB
Requested Bandwidth: CC bps
Packets Received: DD
Loss Rate: EE.E%
Actual Bandwidth: FF bps
Adjusting Bandwidth from XX to YY (new inter-transmission time is ZZ ms)
This should continue for as many rounds as specified in the client's
command line parameters.
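The end-of-round numbers can be computed along these lines (a sketch with illustrative names; it assumes the round's duration is taken as the nominal expected * xmit_time rather than measured wall-clock time):

```c
/* Percentage of expected packets that never arrived. */
double loss_rate(int expected, int received)
{
    return 100.0 * (expected - received) / expected;
}

/* Bandwidth actually delivered, in bits per second: received packets
 * of pkt_size bytes over the round's nominal duration of
 * expected * xmit_time milliseconds. */
double actual_bps(int received, int expected, int pkt_size, int xmit_time)
{
    double duration_s = expected * (xmit_time / 1000.0);
    return (received * pkt_size * 8.0) / duration_s;
}
```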
6. Ending the test and closing the programs
Once the client has finished the number of tests specified in
the command line, it should send a final request to the server
with all values as "-1". Therefore, the message sent should
be "-1 -1 -1 -1". After sending this message the client
should exit.
For the server, once it receives the "-1 -1 -1 -1" message,
it too should exit. Of course it is possible that the server
never receives this message. Therefore, the server should
use the following procedure to avoid hanging and waiting forever.
When the first request
is received, the server should satisfy the request. Once
the last packet is sent to the client, the server should start
a timer. If a new request or a final request is not received
within 10 seconds, the server should simply exit.
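One way to implement the server's 10-second give-up is a receive timeout on the socket (a sketch using SO_RCVTIMEO; a select()-based timer would also work):

```c
#include <sys/socket.h>
#include <sys/time.h>

/* Arrange for recvfrom() on sock to give up after timeout_secs,
 * returning -1 with errno set to EAGAIN/EWOULDBLOCK. */
int set_recv_timeout(int sock, int timeout_secs)
{
    struct timeval tv;
    tv.tv_sec  = timeout_secs;
    tv.tv_usec = 0;
    return setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof tv);
}
```

After sending the last packet of a round, the server would call set_recv_timeout(sock, 10); if the next recvfrom() then fails with EAGAIN, no new request arrived in time and the server should exit.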
Operational Examples
The server is straightforward: it returns essentially no feedback.
It runs and then it exits. Here is an example from the server's
perspective:
dagwood> server 32000 <enter>
[Eventually, after the server responds, it should just exit.]
Here is an example from the client's perspective:
blondie> client dagwood.cs.ucsb.edu 32000 2 100 1000 1000 100 <enter>
End of Round 1
Packets Expected: 100
Requested Bandwidth: 8000 bps
Packets Received: 100
Loss Rate: 0.0%
Actual Bandwidth: 8000 bps
Adjusting Bandwidth from 8000 bps to 8888.9 bps (new inter-transmission time is 900 ms)
End of Round 2
Packets Expected: 100
Requested Bandwidth: 8888.9 bps
Packets Received: 70
Loss Rate: 30.0%
Actual Bandwidth: 6222.2 bps
Adjusting Bandwidth from 8888.9 bps to 8000 bps (new inter-transmission time is 1000 ms)
Beyond This Assignment
There are a number of things to keep in mind when doing this
assignment:
- The protocol that you will be implementing is essentially
a stop-and-wait algorithm that roughly attempts to find the
bandwidth from the server to the client (note that the
bandwidth from the client to the server may be something
completely different). While you don't have to implement the following,
at least give it some thought to what it would take to modify your
program to provide the following functions:
- Run the rate adaptation in real-time, i.e. have the
client send an acknowledgement every time a packet is received
and have the server adjust its rate when it receives an
acknowledgement or a group of acknowledgements.
- Create a more sophisticated algorithm that not only
consumes the most bandwidth available but is also fair to other
flows sharing the network.
- Attempt to also do re-transmissions so no data is lost
in the transfer. Also remember that reliability implies
in-order delivery so you would have to buffer at the client
in the case of missed packets.
You should turn in the source code for your client and
server. Remember that you can do this assignment in either
Java or C. In either case, you should turn in only two
programs: the client and server. The names of these two
files should either be server.c and client.c
or server.java and client.java.
There is no hard copy turnin for this
assignment. Be sure to include your name in each program
that you turn in.
To turn in assignments, use the following
command from the Computer Science CSIL lab:
csil-machine> turnin hw2@cs176b hw2
NOTE: The final "hw2" is a local directory containing the
2 and only 2 programs you are turning in. Be certain to name
this directory exactly "hw2".
ANOTHER NOTE: It is highly recommended that you use the
CSIL machines to do this assignment. All of the tools you
will need are available there; it significantly improves our
ability to help you if you have problems; and it ensures that
if your programs work there, they will work when we grade them.
Your client and server will be tested against a separately
written client and server. The goal of this testing is to
check your two programs for operational correctness against
the specification as described above. The assignment will
be graded out of a maximum of 100 points.
In addition to correctness, part of your grade will depend
on how well your code is written and documented. NOTE: good
code/documentation does not imply that more is better. The
goal is to be efficient, elegant and succinct!