Part 3. Comparison (10 points)
In this part, you’ll answer questions about your implementation of both servers.
Your task
When you test your server with the provided client, the client should print the time elapsed (in microseconds). If you don’t have those numbers recorded, run both the TCP and the UDP client/server pairs again and make note of the results.
Microseconds are a very small unit of measurement: there are 1,000,000 microseconds in a second! We’re using this unit because our messages don’t have very far to travel. Both the client and server are running on the same machine, so a message reaches its destination almost immediately. In a real network, where the server may be hundreds of miles from the client, the trip would take much longer (on the order of milliseconds).
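If you’re curious how elapsed time gets measured at this resolution, a minimal sketch is shown below. This is only an illustration of the timestamp-difference technique, not the provided client’s actual code, and the helper name `measure_us` is hypothetical (your assignment may also be in a different language):

```python
import time

def measure_us(fn):
    """Call fn() and return (result, elapsed time in microseconds)."""
    start = time.perf_counter_ns()  # monotonic clock, nanosecond resolution
    result = fn()
    elapsed_us = (time.perf_counter_ns() - start) // 1_000  # ns -> us
    return result, elapsed_us

# Stand-in for a send/receive round trip: time a trivial computation.
result, elapsed = measure_us(lambda: sum(range(1_000)))
print(f"took {elapsed} microseconds")
```

A real client would wrap its send/receive calls in the timed function instead; the key point is using a monotonic clock, since the wall clock can jump and produce negative or misleading differences.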
Answer the following questions in the README.
- Write down the time elapsed for both your UDP and TCP implementations.
- Which one took longer? Why do you think that is? If they took about the same time, why might that be?
- Look at your implementations for the TCP and UDP servers. What are the main differences between them?
- Part 2 mentions that you should use a number greater than 10 for the size of your array. Why do you think this is? What would happen if you used a size of 10? (Feel free to run it and look at the results.)
At this point, you’re all done and should submit your work.