In the evolution of any scientific discipline there is a period in which attempts are made to develop mathematical theories in order to account for and explain the observations generated by the phenomena with which the discipline is concerned. During this period, the qualitative and verbal theories are replaced, or supplemented, by quantitative and mathematical theories which express, in the form of some type of equation or equations, a postulated mechanism or model that can generate a theoretical set of observations. The theoretical and experimental observations can then be compared in order to see if the model is a reasonable one, i.e., if it is capable of accounting for the experimental observations obtained under the conditions stipulated by the data analysis.
Analysis of data is a process of inspecting, cleaning, transforming, and modeling data with the goal of discovering useful information, suggesting conclusions, and supporting decision making. Data analysis has multiple facets and approaches, encompassing diverse techniques under a variety of names, in different business, science, and social science domains. When analyzing the data, we can select one of two approaches to the study of the phenomenon concerned. These two approaches, which are termed deterministic and stochastic (or probabilistic), reflect the causal nature of the postulated mechanism (or model) which we express in mathematical form. It is not of great interest to ask whether the phenomenon with which we are concerned is deterministic or stochastic. We are concerned instead with the analysis of a deterministic or stochastic model, the investigation of its properties, and its ability to account for experimental observations. The analysis of data, or model building, involves substantial computational complexity. Statistical packages have been developed to reduce this computational burden. The term data analysis refers to the mathematical abstraction, pattern, or
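The distinction between deterministic and stochastic models can be made concrete with a small sketch. The linear mechanism below (slope 2, intercept 1) and the noise level are illustrative assumptions, not taken from the text: a deterministic model assigns one outcome to each input, while a stochastic model adds a random component, so repeated observations at the same input differ. Comparing the theoretical (deterministic) values with simulated "experimental" (stochastic) observations mirrors the model-checking step described above.

```python
import random

random.seed(0)  # fixed seed so the simulation is reproducible

# Deterministic model: the mechanism y = 2x + 1 fully determines the outcome.
def deterministic(x):
    return 2 * x + 1

# Stochastic model: the same mechanism plus Gaussian noise, so the outcome
# is a random variable rather than a single fixed value.
def stochastic(x, sigma=0.5):
    return 2 * x + 1 + random.gauss(0, sigma)

xs = [0, 1, 2, 3, 4]
theoretical = [deterministic(x) for x in xs]      # theoretical observations
experimental = [stochastic(x) for x in xs]        # simulated experimental data

# Compare the two sets of observations, e.g. by mean squared deviation;
# a small value suggests the postulated model accounts for the data.
mse = sum((t - e) ** 2 for t, e in zip(theoretical, experimental)) / len(xs)
print(theoretical)
print(round(mse, 3))
```

With a reasonable noise level the deviation stays small, illustrating how agreement between theoretical and experimental observations supports the postulated model.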