The performance of a system in batch and transaction processing depends, in broad terms, on the time the work takes to traverse the processor(s), the I/O subsystem and the network (in both directions). The time for any unit of work is the sum of these times, which comprises the native time of an unconstrained system plus any wait time (delays) involved.

Network performance is thus an integral player in the response times and throughput of work in a system. Its performance depends on, and can be quantified by, the following:

  • WAN or LAN speed (Mbits/second)
  • WAN or LAN transmission quality; errors = retransmissions = overhead
  • Bandwidth, the maximum transmission rate; the rate actually achieved is the throughput
  • Latency or time for the transit of a packet of data, that is, response time
  • Other delays, including software overhead, transmission nodes delay
  • Transmission Reliability, lossy or lossless, leading to retransmission overhead
  • Jitter, the variation in delay times
  • Data Volumes and frequencies of transmission
  • Frame/Block Sizes of data parcels
  • Network Protocol employed
  • Intermediate Nodes (peer systems), numbers and their characteristics
  • Other nodes - routers, boosters, switches etc. and their protocol exchanges
  • Buffer sizes throughout the network; bigger isn’t always better
  • Design of the client/server application(s) and the interface protocol between them
  • Security software and resulting performance overheads. For example, security checks, encryption/decryption, compression/decompression
  • Ancillary Services involved
  • The use of ‘speedup’ techniques, such as compression and WAN acceleration.

There is a lot more to network performance than you thought, I’ll wager. The final item in the list is the subject of the rest of this paper.


Native performance

The native performance is the time taken by a data packet to traverse the transmission medium and the intervening network components, such as routers. This is essentially fixed, dictated by line speeds and the efficiency of nodes in the network. However, certain parameter changes and techniques can speed up the packet transmission time or latency.

Another way of looking at this is via equations, exemplified here using network scenarios. For a simple point-to-point communication:

Network Time = M/B

Equation 1. Response/Throughput ‘best ever’ - native

where M is the message size (headers + data payload) and B the bandwidth of the transmitting medium (preferably in the same units! Don’t mix your bits with your bytes).

This is the best-you-will-ever-get result (native latency), which assumes any nodes in the network have infinite speed.
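Equation 1 is easily turned into a quick calculation. A minimal sketch, with illustrative message and bandwidth figures (note the units: bits divided by bits/second):

```python
# Native (best-case) network time from Equation 1: time = M / B.
# Units must match: message size in bits, bandwidth in bits/second.

def native_network_time(message_bits: float, bandwidth_bps: float) -> float:
    """Best-case transit time for a message, assuming infinitely fast nodes."""
    return message_bits / bandwidth_bps

# Example: a 1 MB message (8,000,000 bits) over a 100 Mbit/s link.
t = native_network_time(8_000_000, 100_000_000)
print(f"{t * 1000:.1f} ms")  # 80.0 ms
```

Mixing megabytes with megabits here is the classic way to be a factor of eight out, hence the single-unit discipline above.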

For a more realistic equation, which recognizes that there is extra hardware and software involved in such transmissions, but assumes the software and hardware speeds are fixed:

Network Time = M/B + Node Delay

Equation 2. Response/Throughput, network with nodes

If we assume that fixing the hardware is too disruptive and expensive, it leaves the software and sneaky techniques as avenues to increase the apparent speed of a network. Some of them actually increase throughput and thereby the apparent speed of the network; others are in essence traffic avoidance and some of these are dealt with next.
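Equation 2 can be sketched the same way: the native M/B time plus the fixed delay contributed by each node on the path. The three 5 ms node delays below are illustrative values, not measurements:

```python
# Equation 2: network time = M/B plus the fixed delay added by each
# node (routers, switches, etc.) on the path.

def network_time(message_bits: float, bandwidth_bps: float,
                 node_delays_s: list) -> float:
    """Transit time including per-node delays (Equation 2)."""
    return message_bits / bandwidth_bps + sum(node_delays_s)

# A 1 MB message (8,000,000 bits) over a 100 Mbit/s link, passing
# through three nodes that each add 5 ms of processing/queuing delay.
t = network_time(8_000_000, 100_000_000, [0.005, 0.005, 0.005])
print(f"{t * 1000:.1f} ms")  # 95.0 ms, versus 80.0 ms native
```

The node-delay term is exactly what acceleration techniques attack, since M and B are usually fixed by the application and the installed line.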

Network acceleration/optimization

These techniques usually go under the title WAN acceleration or WAN optimization. The normal definition is the optimization of available bandwidth in the following areas:

  • Don’t send what you don’t need to send, especially large files
  • If it must be sent, try to schedule it appropriately so as not to interfere with critical workloads. Use business prioritization as the decision yardstick
  • Use techniques to optimize use of the available bandwidth (discussed below).
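The second bullet, scheduling by business priority, can be sketched with a simple priority queue: queued transfers are released in business-priority order so that bulk traffic never jumps ahead of critical workloads. The job names and priority values are illustrative:

```python
# A minimal sketch of business-priority transfer scheduling.
# Lower number = more business-critical; names are illustrative.
import heapq

queue = []
heapq.heappush(queue, (3, "nightly backup (large file)"))
heapq.heappush(queue, (1, "payment transactions"))
heapq.heappush(queue, (2, "CRM replication"))

order = []
while queue:
    priority, job = heapq.heappop(queue)
    order.append(job)
    print(f"sending priority {priority}: {job}")
# Transfers are released in order 1, 2, 3 regardless of arrival order.
```

Real products do this per-packet or per-flow rather than per-job, but the decision yardstick, business priority, is the same.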

Some of the tricks of the optimization trade are (see Metzler reference below):

Layer/Technique | Optimization Area(s) | Comment
--- | --- | ---
Application optimization | Careful control of how and when data is transmitted elsewhere |
Layers 7, 6 | Deduplication, data compression, caching | Deduplication is often used in backup/disaster recovery (see notes below this table)
TCP and appropriate protocol optimization | Adjusting the network congestion avoidance parameters of TCP connections over high-bandwidth, high-latency networks |
Network layer | Route optimization, forward error correction (FEC) | See the notes following this table
Data link layer | Header compression |

Table 1. WAN Optimization (Speedup) Techniques

  1. Caching: this is the storage of data transmitted from a source to a destination at that destination. If the same data is requested at the destination, the optimization software recognizes this and stops any request to the original source for a retransmission.
  2. Deduplication: ‘Data deduplication is the replacement of multiple copies of data (at various levels of granularity) with references to a shared copy in order to save storage space and/or bandwidth’ (SNIA Definition). Data deduplication can operate at the file, block or bit level. It is often used when networking data to a geographically distant backup site, both as a time saver and in disaster recovery architectures.
  3. Compression: this is fairly obvious and the data transmission is reduced by an amount dictated by the efficiency of the data compression/decompression algorithms used.
  4. FEC (Forward Error Correction): a ‘receiver makes it right’ transmission technique where extra bits are added to a packet/message for analysis at the receiving end. In general, it means that the receiving end of the transmission is able to detect, and in most cases correct, any erroneous transmissions. Packets warranting retransmission may be:
    - corrupted due to errors, for example noise
    - lost in link or host failures
    - dropped due to buffer overflow
    - dropped due to aging or sell-by date exceeded, for example the TTL (time to live) field in IP (internet protocol)
  5. Traffic shaping: Traffic shaping is the practice of regulating network data transfer to assure a certain level of performance, quality of service (QoS). The practice involves favouring transmission of data from higher priority applications over lesser ones, as designated by the business organization. It is sometimes called packet shaping and is one of the sneaky techniques mentioned earlier.
  6. Congestion Control: This TCP function is designed to stop the sender shipping more data than the network can handle, as if trying to drink from a fire hose. TCP uses a number of mechanisms based on a parameter called the congestion window.
  7. Protocol acceleration: A class of techniques for improving application performance by avoiding or circumventing shortcomings of various protocols.
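Deduplication (item 2) and caching (item 1) share the same core idea: fingerprint the data and send a short reference whenever the other end already holds a copy. A minimal block-level sketch, with an illustrative 4-byte block size and made-up payload:

```python
# A minimal sketch of block-level deduplication: each block is
# fingerprinted with a hash; blocks already held at the destination
# travel as short references instead of being retransmitted.
import hashlib

def dedup_send(data: bytes, seen: set, block_size: int = 4):
    """Split data into blocks; return (blocks sent in full, references)."""
    sent, refs = [], []
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        if digest in seen:
            refs.append(digest)      # destination already has this block
        else:
            seen.add(digest)
            sent.append(block)       # transmit the block itself
    return sent, refs

seen_at_destination = set()
sent, refs = dedup_send(b"ABCDABCDEFGH", seen_at_destination)
print(len(sent), "blocks sent,", len(refs), "sent as a reference")
# The repeated 'ABCD' block goes over the wire only once.
```

Production systems use much larger (often variable-sized) blocks and persist the fingerprint index, but the bandwidth saving comes from exactly this substitution.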

There are several forms of protocol acceleration:

  • TCP Acceleration
  • CIFS (Common Internet File System) and NFS (Network File System) Acceleration
  • HTTP (Hypertext Transfer Protocol) Acceleration
  • Microsoft Exchange Acceleration.
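TCP acceleration devices largely work by manipulating the congestion window mentioned in item 6. A toy loop showing TCP-style behaviour, exponential slow start, additive increase, multiplicative decrease on loss, makes the sawtooth visible; the window sizes and the loss round are illustrative, not taken from any real trace:

```python
# Toy AIMD loop: slow start below ssthresh, additive increase above it,
# multiplicative decrease when a loss is detected. Values are illustrative.

cwnd = 1.0        # congestion window, in segments
ssthresh = 8.0    # slow-start threshold, in segments

for rtt in range(10):
    if rtt == 6:                 # pretend a loss is detected on round 6
        ssthresh = cwnd / 2      # remember half the window...
        cwnd = ssthresh          # ...and back off to it
    elif cwnd < ssthresh:
        cwnd *= 2                # slow start: exponential growth
    else:
        cwnd += 1                # congestion avoidance: additive increase
    print(f"RTT {rtt}: cwnd = {cwnd:g} segments")
```

Tuning ssthresh and the back-off behaviour for high-bandwidth, high-latency links is precisely the “adjusting the network congestion avoidance parameters” entry in Table 1.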

See references 1 and 2 below this section, which present good coverage of some of the factors listed above.

Remember: Don’t get carried away with exciting acceleration functions, however, since you still have to ‘attach’ them to a working system with minimum or zero disruption.

This article was written by Terry Critchley, author of the books below which provided most of the detail in this paper: