Over the past decade, the speed of computer and telecommunications networks has improved substantially. This rapid growth in speed is expected to continue over the next decade, because many new applications in important areas such as data and video will demand very high network bandwidths. Each device on a network has a limited amount of memory for storing the data that travels over the network. When the amount of data on the network is excessive, any data that this memory cannot hold must be retransmitted, resulting in congestion. Network congestion thus contributes to data loss in a phenomenon known as a "bottleneck," because only a limited number of the packets sent can actually be processed. Congestion, therefore, occurs in a network when the aggregate demand exceeds the available capacity of its resources. The problem will grow as network speeds increase, particularly given the traffic volumes of today's very high-speed networks.
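The buffer-overflow mechanism described above can be illustrated with a minimal sketch. The buffer size, arrival rate, and service rate below are hypothetical values chosen only to make demand exceed capacity; once the node's memory is full, every further arrival counts as a loss that would have to be retransmitted.

```python
# Hypothetical sketch: a node with a finite buffer drops packets once
# aggregate demand exceeds capacity, illustrating the "bottleneck" effect.
# All rates and sizes are assumed values for illustration only.
from collections import deque

BUFFER_SIZE = 8      # assumed buffer capacity (packets)
SERVICE_RATE = 1     # packets the node can forward per tick
ARRIVAL_RATE = 3     # packets arriving per tick (demand > capacity)

buffer = deque()
forwarded = dropped = 0
for tick in range(100):
    for _ in range(ARRIVAL_RATE):
        if len(buffer) < BUFFER_SIZE:
            buffer.append(tick)
        else:
            dropped += 1          # no memory left: packet is lost and must be resent
    for _ in range(SERVICE_RATE):
        if buffer:
            buffer.popleft()
            forwarded += 1

print(f"forwarded={forwarded} dropped={dropped}")
```

Running the sketch shows that once the buffer saturates, the node settles into a steady state where it forwards one packet per tick while dropping the excess two, which is exactly the limited-throughput behaviour the bottleneck metaphor captures.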
As the network itself is based on queuing principles, the most common cause of this scenario is a mismatch in speed between networks. For example, a typical local area network (LAN) environment might still use a legacy interface with 10 Mbps Ethernet connections while the servers use a high-speed network technology such as asynchronous transfer mode (ATM); data flowing from the servers at 155 Mbps to clients at 10 Mbps will therefore experience congestion at the interface between the ATM and Ethernet networks.
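The speed mismatch above can be quantified with a back-of-the-envelope calculation: the interface buffer fills at the difference between the two link rates. The 4 MB buffer size below is an assumed figure, not one from the example; only the 155 Mbps and 10 Mbps rates come from the text.

```python
# Sketch of the ATM-to-Ethernet mismatch: data arrives at 155 Mbps but
# drains at only 10 Mbps, so backlog grows at the difference of the rates.
ATM_RATE_MBPS = 155       # server-side link speed (from the example)
ETHERNET_RATE_MBPS = 10   # legacy client-side link speed (from the example)
BUFFER_MB = 4             # assumed interface buffer size, in megabytes

growth_mbps = ATM_RATE_MBPS - ETHERNET_RATE_MBPS        # 145 Mbit/s of backlog growth
seconds_to_overflow = (BUFFER_MB * 8) / growth_mbps     # convert MB to Mbit

print(f"backlog grows at {growth_mbps} Mbps; "
      f"a {BUFFER_MB} MB buffer overflows in {seconds_to_overflow:.3f} s")
```

Under these assumptions the buffer overflows in well under a second of sustained transfer, which is why such interfaces either drop packets or rely on flow control rather than buffering alone.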
Another common cause is congestion inside a network node that has multiple ports. Such a node can be a switch or a gateway such as a router; congestion arises when data destined for a single output port arrives on many different input ports. The faster and more numerous these input ports are, the more severe the congestion will be. As a consequence of the loss of