Aditya Saxena

Throughput in Computer Networks

Throughput in computer networking is the rate at which messages are successfully delivered over a communication channel, such as packet radio or Ethernet. The data contained in these messages may be delivered through network nodes, or over logical or physical links. Throughput is most commonly measured in bits per second (bps).

The throughput of a network can be influenced by several factors, such as end-user behavior, the processing power of the system's components, and limitations of the underlying physical medium. Once the various protocol overheads are accounted for, the useful rate of data transfer can be noticeably lower than the maximum achievable throughput.

What Is Throughput in Computer Networking?

The number of messages successfully transferred per unit of time is referred to as throughput. Its value depends on many factors, such as hardware capabilities, the available signal-to-noise ratio, and the available bandwidth.

The maximum throughput of a computer network may be greater than the throughput achieved in regular operation. When the various protocol costs are considered, the useful rate of data transfer can be considerably smaller than the maximum achievable throughput.

How Do We Measure Network Throughput?

We measure throughput by counting the amount of data transmitted between locations during a certain time interval. For a better understanding, consider an example. Suppose a highway can carry a maximum of 200 vehicles at a time, but at an arbitrary instant only 150 vehicles are observed moving across it. Here 200 vehicles per unit of time is the capacity, also known as the bandwidth, while the throughput at that instant of observation is 150 vehicles.

Let us take another example that is more technical and precise. Assume a network has a bandwidth of 10 Mbps (megabits per second) but can pass only 12,000 frames per minute on average, with each frame consisting of 10,000 bits on average. The throughput of this network can be computed as:

(12,000 × 10,000) / 60 = 2,000,000 bps = 2 Mbps
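The arithmetic above can be sketched in a few lines of Python; the frame counts and sizes are the averages from the example, not measured values:

```python
# Throughput from the example: average frames per minute times
# average bits per frame, converted to bits per second.
frames_per_minute = 12_000
bits_per_frame = 10_000

throughput_bps = frames_per_minute * bits_per_frame / 60
throughput_mbps = throughput_bps / 1_000_000

print(f"{throughput_mbps:.0f} Mbps")  # 2 Mbps, well below the 10 Mbps bandwidth
```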

Factors Affecting Network Throughput

Transmission Medium Limitation

The bandwidth of a transmission medium limits the throughput achievable over it. For instance, a FastEthernet interface theoretically provides a 100 Mbps data transmission rate, so no amount of traffic sent over it can exceed the 100 Mbps limit.
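Even on an otherwise idle link, per-frame costs keep application throughput below the nominal line rate. The sketch below uses the standard Ethernet per-frame overheads (preamble, header, frame check sequence, inter-frame gap) to estimate the best-case payload throughput on FastEthernet:

```python
# Per-frame overhead keeps payload throughput below the nominal line rate.
LINE_RATE_MBPS = 100              # FastEthernet nominal rate
PAYLOAD_BYTES = 1500              # maximum Ethernet payload
OVERHEAD_BYTES = 8 + 14 + 4 + 12  # preamble + header + FCS + inter-frame gap

efficiency = PAYLOAD_BYTES / (PAYLOAD_BYTES + OVERHEAD_BYTES)
print(f"best-case payload throughput: {LINE_RATE_MBPS * efficiency:.1f} Mbps")
```

Smaller frames pay the same fixed overhead per frame, so the achievable payload throughput drops further as frame size shrinks.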

Network Congestion

The extent of congestion in a network also influences its throughput. For instance, driving on an empty 4-lane highway is far faster than driving on the same highway crowded with other cars. The more congested the network, the lower the throughput available on it.

Latency

The time a packet takes to travel from the sender to its destination is known as latency. For certain kinds of traffic, the greater the latency in the network, the lower the throughput.
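One concrete way latency caps throughput: for window-based protocols such as TCP, at most one window of data can be "in flight" per round trip, so throughput cannot exceed window size divided by round-trip time. The window size and RTT values below are illustrative assumptions:

```python
# Window-limited throughput: at most one window per round trip.
window_bits = 65_535 * 8      # classic 64 KiB TCP receive window, in bits

for rtt_ms in (10, 50, 200):
    max_throughput_mbps = window_bits / (rtt_ms / 1000) / 1_000_000
    print(f"RTT {rtt_ms:>3} ms -> at most {max_throughput_mbps:.1f} Mbps")
```

Note how the same window yields dramatically less throughput as the round-trip time grows, regardless of how fast the underlying link is.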

Enforced Limitation

Suppose an organization purchases, say, 5 Mbps of link capacity from an ISP (Internet Service Provider). With existing technologies, the ISP's transmission medium may be able to deliver far more than the 5 Mbps being paid for, so the ISP explicitly enforces a 5 Mbps cap on the link.

Packet Loss and Errors

Packet loss and errors can affect the throughput of certain types of traffic, because lost or corrupted packets may have to be retransmitted. This lowers the average throughput between the communicating devices.
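A simplified model of this effect: if a fraction p of packets is lost, only the surviving fraction (1 − p) of transmissions carries useful data, so the useful throughput (goodput) shrinks accordingly. This sketch ignores timeout and protocol effects, and the link rate is an illustrative assumption:

```python
# Useful throughput (goodput) under packet loss, in a simplified model
# where a fraction `loss_rate` of transmissions is wasted.
link_rate_mbps = 10.0

for loss_rate in (0.0, 0.01, 0.05):
    goodput = link_rate_mbps * (1 - loss_rate)
    print(f"loss {loss_rate:4.0%}: ~{goodput:.2f} Mbps useful throughput")
```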

Protocol Operation

The protocol used to carry packets over a link can also affect throughput. TCP's congestion avoidance and flow control features, for example, influence both the timing and the amount of data that may be sent from one device to another.
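One well-known back-of-the-envelope model of this interaction is the Mathis et al. formula, which bounds steady-state TCP throughput by roughly (MSS / RTT) × (C / √p), where p is the loss rate and C ≈ 1.22. The MSS, RTT, and loss rate below are illustrative assumptions:

```python
import math

# Rough TCP throughput bound from the Mathis et al. model:
#   throughput ≈ (MSS / RTT) * (C / sqrt(p))
# showing how congestion control ties throughput to RTT and loss rate.
MSS_BITS = 1460 * 8   # typical Ethernet-sized maximum segment, in bits
C = 1.22

def tcp_throughput_mbps(rtt_s: float, loss_rate: float) -> float:
    return (MSS_BITS / rtt_s) * (C / math.sqrt(loss_rate)) / 1_000_000

print(f"~{tcp_throughput_mbps(0.05, 0.001):.1f} Mbps")
```

The formula makes the earlier points quantitative: doubling the RTT halves the bound, and throughput falls with the square root of the loss rate.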

Network Throughput Optimization

The most significant step in optimizing throughput is to minimize network latency. High latency reduces throughput, which in turn leads to poor network performance. The most common cause of latency is many people using the network simultaneously; latency also rises when many users download at the same time.

The networking counterpart of a roadway traffic jam is a network bottleneck: traffic gets congested at certain points during the day and reduces the network's performance. For instance, performance in a large organization is often poor just after lunch, when all the employees return to work. Bottlenecks can be addressed in various ways, such as upgrading routers to cope with the traffic or reducing the number of nodes in the network so that packets travel shorter paths.
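The bottleneck idea can be sketched directly: end-to-end throughput over a multi-hop path is capped by the slowest link along the way, just as a convoy moves at the slowest car's speed. The link rates below are illustrative, not real measurements:

```python
# End-to-end throughput over a multi-hop path is limited by the slowest
# link along the path (the bottleneck).
link_rates_mbps = [1000, 100, 500, 40]  # hops along a hypothetical path

bottleneck = min(link_rates_mbps)
print(f"path throughput capped at {bottleneck} Mbps")
```

Upgrading any link other than the bottleneck leaves the end-to-end throughput unchanged, which is why bottleneck identification matters before spending on upgrades.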

Throughput vs Bandwidth

| Criterion | Bandwidth | Throughput |
| --- | --- | --- |
| Real-world comparison | The speed at which water comes out of a pipe. | The total quantity of water that actually comes out. |
| Concerned with | Transmission of data over some medium. | Communication between two parties. |
| Influencing factors | Largely unaffected by physical hindrances, since it is a theoretical measure. | Impacted by a host of factors: transmission errors, network devices, changes in network traffic, interference, etc. |
| Definition | The maximum quantity of data that can be transferred from one point to another. | The actual measurement of the data transmitted through the medium at a given instant. |
| Layer of relevance | Physical layer. | Any layer. |
| Dependence on latency | Independent. | Dependent. |
| Unit of measurement | Bits per second. | Bits per second. |

Conclusion

  • Throughput in computer networking is the rate at which messages are successfully delivered over a communication channel, such as packet radio or Ethernet.
  • The maximum throughput of a computer network may be greater than the throughput achieved in regular operation.
  • We measure throughput by counting the amount of data transmitted between locations during a certain time interval.
  • The most significant step in optimizing throughput is to minimize network latency.
