Anirban Basu
Dept. of CSE R&D, East Point College of Engineering & Technology, Bangalore, India
abasu@pqrsoftware.com
Abstract

Benchmarking plays a critical role in evaluating the performance of systems that are ready for operation. However, with so many benchmarks available, and with no standardization among them, choosing the right benchmark is not always an easy task. Further, interpretation of benchmarking results requires statistical analysis. This paper discusses the benchmarks available for different types of workloads and touches upon the challenges facing performance analysts interested in benchmarking.

Key words: Computer Systems Performance Evaluation, Benchmarking, Synthetic Benchmarks, Kernel Benchmarks, Application Benchmarks.

1. Introduction

Scientific evaluation of the performance of computer systems is needed during [1]:
- Vendor evaluation, and selection of the hardware and software configuration of a computer system prior to procurement;
- Improvement of the performance, reduction of the cost, etc., of an existing computer system;
- System design, when different implementation alternatives are being evaluated.
Modeling techniques such as Queuing Networks and Petri Nets [2][3][4] are used when designing a new system, and for improving the performance of an existing system (when the system is not completely built). Measuring performance by running standard programs, referred to as Benchmark Programs, is required for comparing systems that are ready for installation. With the evolution of new types of system architecture such as Cloud Computing, it has become more difficult to compare the performance of computer systems simply by looking at their specifications. Tests are needed to compare different systems and ascertain the suitability of a system for executing the workload of a particular installation. In the area of Computer Systems Performance Evaluation, Benchmarking [2] is the act of running
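The idea of measuring performance by timing a standard program can be illustrated with a minimal sketch. The toy kernel below (a naive matrix multiplication) merely stands in for a real kernel benchmark such as the Livermore loops [7] or LINPACK [9]; the workload, problem size, and MFLOPS formula are illustrative assumptions, not part of any standard suite.

```python
import time

def kernel(n):
    """Toy compute kernel: naive dense matrix multiplication.

    A stand-in for a real kernel benchmark; the workload is
    illustrative only.
    """
    a = [[float(i + j) for j in range(n)] for i in range(n)]
    b = [[float(i - j) for j in range(n)] for i in range(n)]
    c = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            s = 0.0
            for k in range(n):
                s += a[i][k] * b[k][j]
            c[i][j] = s
    return c

def benchmark(n=100, repeats=3):
    """Time the kernel several times and keep the best run,
    a common convention for reducing measurement noise,
    then convert elapsed time into a rate metric (MFLOPS)."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        kernel(n)
        best = min(best, time.perf_counter() - start)
    flops = 2.0 * n ** 3        # one multiply + one add per inner-loop step
    return flops / best / 1e6   # millions of floating-point ops per second

if __name__ == "__main__":
    print(f"{benchmark():.1f} MFLOPS")
```

Reporting a rate (work per unit time) rather than raw elapsed time is what lets results from different machines be compared, which is the central purpose of benchmark programs discussed above.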
References

[1] D. Ferrari, Computer Systems Performance Evaluation, Prentice Hall, Englewood Cliffs, NJ, 1978.
[2] R. Jain, The Art of Computer Systems Performance Analysis, John Wiley and Sons, 1991.
[3] K. S. Trivedi, Probability and Statistics with Reliability, Queuing and Computer Science Applications, Prentice Hall, Englewood Cliffs, NJ, 1982.
[4] P. J. Fortier and H. E. Michael, Computer Systems Performance Evaluation and Prediction, Elsevier Science (USA), 2003.
[5] H. J. Curnow and B. A. Wichmann, "A Synthetic Benchmark", Computer Journal, Vol. 19, No. 1, pp. 43-49, 1976.
[6] R. P. Weicker, "Dhrystone: A Synthetic Systems Programming Benchmark", Communications of the ACM, Vol. 27, No. 10, Oct. 1984, pp. 1013-1030.
[7] F. H. McMahon, "The Livermore Fortran Kernels Test of the Numerical Performance Range", in J. L. Martin (ed.), Performance Evaluation of Supercomputers, pp. 143-186, Elsevier Science B.V., North-Holland, Amsterdam, 1988.
[8] D. Bailey and J. T. Barton, "The NAS Kernel Benchmark Programs", NASA Ames Research Center, June 13, 1986.
[9] J. J. Dongarra, "Performance of Various Computers Using Standard Linear Equations Software in a Fortran Environment", Computer Science Dept. Technical Report CS-89-85, University of Tennessee, Knoxville, TN, March 1990.
[10] E. Anderson, Z. Bai, C. Bischof, J. Demmel, J. Dongarra, J. Du Croz, A. Greenbaum, S. Hammarling, A. McKenney, S. Ostrouchov, and D. Sorensen, LAPACK Users' Guide, SIAM, Philadelphia, PA, 1992.
[11] www.netlib.org/scalapack
[12] M. Berry et al., "The PERFECT Club Benchmarks: Effective Performance Evaluation of Supercomputers", July 1994.
[13] www.spec.org
[14] www.tpc.org
[15] www.eembc.org
[16] www.bapco.com
[17] www.cloudharmony.com