Surya Prateek Surampalli
Information Technology Department, Southern Polytechnic State University
ssurampa@spsu.edu
Abstract—In today's high-traffic Internet, it is often desirable to have multiple servers represent a single logical destination so that they can share the load. A typical configuration comprises multiple servers behind a load balancer that determines which server serves each client request. Such equipment is expensive, has a rigid set of policies, and is a single point of failure. In this paper, I propose an idea and design for an alternative load-balancing architecture built from an OpenFlow switch connected to a NOX controller, which gains policy flexibility, costs less, and has the potential to be more robust to failure with future generations of switches.
I. Introduction In today's increasingly Internet-based cloud services, a client sends a request to a URL, i.e., to a logical server, and receives a response that may come from any of several physical servers standing behind that single logical address. Google is a well-known example: as soon as the client resolves the IP address from the URL, the request is sent to a server farm [1].
A load balancer is an expensive piece of equipment that acts as a reverse proxy and distributes network or application traffic across a number of servers. Load balancers are used to increase capacity (concurrent users) and reliability of applications. They improve overall application performance by decreasing the burden on servers associated with managing and maintaining application and network sessions, and by performing application-specific tasks [1]. Since load balancers are not commodity equipment and run custom software, their policies are rigid: dedicated administrators are required, and arbitrary policies cannot be implemented. Moreover, because the policy engine and the switch are coupled in a single device, the load balancer is reduced to a single point of failure [2].
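The distribution policy at the heart of such a device can be as simple as a deterministic hash over flow identifiers, which is also the kind of policy a controller-based design can install as flow rules. The following is a minimal sketch of that idea; the server list and function names are illustrative assumptions, not part of any load-balancer product or the NOX API.

```python
# Sketch of per-flow server selection by hashing the flow identifier.
# SERVERS and pick_server are hypothetical names for illustration.
import hashlib

SERVERS = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]  # backend replica addresses

def pick_server(client_ip: str, client_port: int) -> str:
    """Deterministically map a client flow to one backend server."""
    key = f"{client_ip}:{client_port}".encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return SERVERS[digest % len(SERVERS)]

# Because the mapping is deterministic, packets of the same flow always
# reach the same server, so established connections are preserved.
print(pick_server("192.168.1.5", 40000))
```

A controller implementing this policy in software, rather than firmware baked into a proprietary appliance, is what makes arbitrary policies (weighted, least-loaded, content-aware) feasible to swap in.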
References
[1] OpenFlow Switch Specification, Version 0.8.9 (Wire Protocol 0x97). Current maintainer: Brandon Heller (brandonh@stanford.edu). December 2, 2008.
[5] C. E. Leiserson. Fat-trees: Universal networks for hardware-efficient supercomputing. IEEE Transactions on Computers, 1985.
[6] T. Benson, A. Anand, A. Akella, and M. Zhang. Understanding Data Center Traffic Characteristics. ACM SIGCOMM WREN workshop, 2009.
[7] C. Hopps. Analysis of an Equal-Cost Multi-Path Algorithm. RFC 2992, IETF, 2000.
[8] W. J. Dally and B. Towles. Principles and Practices of Interconnection Networks. Morgan Kaufmann, 2004.
[10] N. McKeown, T. Anderson, H. Balakrishnan, G. Parulkar, L. Peterson, J. Rexford, S. Shenker, and J. Turner. OpenFlow: Enabling Innovation in Campus Networks. ACM SIGCOMM CCR, 2008.
[11] R. N. Mysore, A. Pamboris, N. Farrington, N. Huang, P. Miri, S. Radhakrishnan, V. Subramanya, and A. Vahdat. PortLand: A Scalable, Fault-Tolerant Layer 2 Data Center Network Fabric. ACM SIGCOMM, 2009.
[13] B. Lantz, B. Heller, and N. McKeown. A Network in a Laptop: Rapid Prototyping for Software-Defined Networks. ACM HotNets, 2010.
[14] Y. Zhang, H. Kameda, and S. L. Hung. Comparison of dynamic and static load-balancing strategies in heterogeneous distributed systems. IEE Proceedings: Computers and Digital Techniques, 1997.
[16] N. Handigol, S. Seetharaman, M. Flajslik, N. McKeown, and R. Johari. Plug-n-Serve: Load-balancing web traffic using OpenFlow. ACM SIGCOMM Demo, 2009.
[17] R. Wang, D. Butnariu, and J. Rexford. OpenFlow-Based Server Load Balancing Gone Wild. Hot-ICE, 2011.
[18] M. Koerner and O. Kao. Multiple service load-balancing with OpenFlow. IEEE HPSR, 2012.