
TCP/IP is the most commonly used protocol suite today and carries virtually all internet traffic. Services such as the web (HTTP) and file transfer (FTP) use TCP/IP.

The following is a brief introduction to TCP/IP; it isn't meant to be an in-depth technical description. For details about TCP/IP, the RFCs that define the internet protocols, available at www.faqs.org/rfcs/, are a great resource. RFC1180 is a TCP/IP tutorial and a good starting point. The book TCP/IP Illustrated by W. Richard Stevens (Addison-Wesley, 1994) is also an excellent resource.

TCP, the transport protocol in TCP/IP, is connection-oriented. This means that a connection is maintained between two parties for a period of time. The two parties that communicate are usually referred to as the client and the server. Communication between the client and the server takes place in the form of packets. Each packet holds a number of bytes.
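
To make the connection-oriented exchange concrete, here is a minimal sketch using Python's standard socket module. The host name, port and request are placeholder values chosen for illustration; they have nothing to do with PerformanceGuard itself.

    import socket

    # Hypothetical server and request; placeholders only, not related to PerformanceGuard.
    HOST, PORT = "example.com", 80

    # Opening the connection starts with TCP handshakes; the connection then
    # stays up while the client and server exchange packets.
    with socket.create_connection((HOST, PORT)) as conn:
        # The client's request may be split across one or more packets.
        conn.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
        # The server's response likewise arrives as one or more packets.
        reply = conn.recv(4096)
        print(reply[:100])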

Trains

A number of packets flowing in one direction without any packets in the other direction is called a train.

The following is an example of the role of trains in client-server communication:

  • A client sends a request to a server; the request is small enough to be contained in a single packet. That's the first train in the illustration's communication flow.
  • The server responds; the response requires three packets. That's the second train.
  • The client then sends another request, and the server responds. Those are the third and fourth trains in the illustration.

Looking at sent and received trains can be useful when analyzing performance, for example to identify missing responses or services that generate an excessive number of trains.
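
As a rough illustration of how a packet sequence breaks down into trains, the following Python sketch groups consecutive packets travelling in the same direction into one train. It is an approximation of the concept, not PerformanceGuard's actual implementation.

    # Sketch only: group a sequence of packet directions into trains.
    # Consecutive packets travelling in the same direction form one train;
    # a change of direction starts a new train (which is also why a client
    # "interrupting" a response splits it into two trains, as noted below).
    def group_into_trains(directions):
        trains = []                                # list of [direction, packet count]
        for direction in directions:
            if not trains or trains[-1][0] != direction:
                trains.append([direction, 0])      # direction changed: new train
            trains[-1][1] += 1
        return [(d, count) for d, count in trains]

    # The example above: 1 request packet, 3 response packets, 1 request, 1 response.
    packets = ["C->S", "S->C", "S->C", "S->C", "C->S", "S->C"]
    print(group_into_trains(packets))
    # [('C->S', 1), ('S->C', 3), ('C->S', 1), ('S->C', 1)]  -> four trains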

What if the client sends another request before the server has finished responding?
In such cases PerformanceGuard considers that to be a new train. Thus, if the server's response consists of ten packets, but the server is "interrupted" by the client after having sent only five of the ten packets, the remaining five packets are seen by PerformanceGuard as a new train. In communication that has a well-defined client-server relationship (for example HTTP or NetBIOS) this isn't a problem, because the involved parties don't interrupt each other. However, with some protocols that don't have a clearly defined client-server relationship (for example Telnet or ICA), train information in PerformanceGuard will be less useful.

Application Data and Handshakes

Two types of information are exchanged between the server and client:

  • Application data
  • Handshakes

Whenever a connection is established or torn down, a number of handshakes are exchanged between the server and the client. These handshakes are sent in separate packets without application data. During the lifetime of a connection, handshakes are sent either as separate packets or as part of packets that carry application data. Only packets that contain application data are considered when PerformanceGuard measures response times.
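
The following Python sketch illustrates this distinction by treating a packet as application data only when its TCP payload is non-empty. The Packet type and its field names are assumptions made for the example; they don't correspond to a real PerformanceGuard or operating-system API.

    from dataclasses import dataclass

    # Illustrative packet model; field names are assumptions for this sketch.
    @dataclass
    class Packet:
        flags: set        # TCP flags, e.g. {"SYN"}, {"ACK"}, {"PSH", "ACK"}
        payload_len: int  # bytes of application data carried by the segment

    def carries_application_data(pkt: Packet) -> bool:
        # Handshake and pure-ACK packets have an empty payload and are ignored
        # when response times are measured.
        return pkt.payload_len > 0

    print(carries_application_data(Packet({"SYN"}, 0)))           # False: handshake
    print(carries_application_data(Packet({"PSH", "ACK"}, 512)))  # True: application data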

When a client sends a request to a server, the request consists of one or more packets. The server then processes the request and sends one or more packets back to the client.

PerformanceGuard Response Time

The response time is the time that elapses from when the last request packet has been sent until the first reply packet is received from the server.
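
The following Python sketch applies this definition to a simplified, made-up packet trace for a single request/response exchange, ignoring handshake-only packets as described in the previous section. It is an illustration of the definition, not PerformanceGuard's internal code.

    # Sketch of the definition above, for a single request/response exchange.
    # The trace is made-up data: (timestamp in seconds, direction, payload bytes).
    def response_time(trace):
        data = [p for p in trace if p[2] > 0]   # only packets with application data count
        last_request = max(t for t, d, _ in data if d == "C->S")
        first_reply = min(t for t, d, _ in data if d == "S->C" and t > last_request)
        return first_reply - last_request

    trace = [
        (0.000, "C->S", 0),     # SYN       - handshake, no application data
        (0.010, "S->C", 0),     # SYN/ACK   - handshake
        (0.011, "C->S", 0),     # ACK       - handshake
        (0.012, "C->S", 400),   # last request packet carrying application data
        (0.095, "S->C", 1460),  # first reply packet carrying application data
        (0.096, "S->C", 900),
    ]
    print(response_time(trace))  # roughly 0.083 seconds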
