Bayesian theory is increasingly being adopted by data scientists and analysts across the world. In most practical settings the available data or information is incomplete, and reasoning in this realm of inductive logic requires probability theory. On this view, probability theory is now recognized as a valid principle of logic for drawing inferences about hypotheses of interest. In the late 20th century, E.T. Jaynes advanced the view of "probability theory as logic". Today this framework is commonly called Bayesian probability theory, in recognition of the work done in the late 18th century by the English clergyman and mathematician Thomas Bayes (Gregory, 2010).
Bayesian methodology is commonly employed to judge the relative validity of hypotheses in the face of noisy, sparse, or uncertain data, or to adjust the parameters of a particular model (Olshausen, 2004). The technique is now finding relevance and driving a revolution in fields ranging from genetics to marketing. It is a widely applied alternative that has surpassed earlier approaches such as null hypothesis significance testing (NHST), p-values, and confidence intervals. Bayesian methods are useful for estimating the parameters of nonlinear hierarchical models, which can be flexibly tailored to the specifics of a research question and its application. This statistical methodology is therefore considered to have opened the door to extensive modelling practices that were previously inaccessible (Kruschke, Aguinis, & Joo, 2012). As stated in an article in The Economist (2000), the essence of Bayesian analysis lies in providing a mathematical account of how much one should change an existing belief given a new set of information; in other words, it helps a data scientist combine new data with existing knowledge.
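As a minimal, illustrative sketch of this updating idea (the coin-bias example and helper functions below are hypothetical and not taken from the cited sources), the following Python fragment combines an existing Beta prior over a coin's probability of heads with newly observed flips, using the standard conjugate update Beta(alpha, beta) to Beta(alpha + heads, beta + tails):

# Illustrative sketch of Bayesian belief updating (coin-bias example).
# An existing Beta prior over the coin's probability of heads is combined
# with newly observed flips via the conjugate Beta-binomial update rule.

def update_belief(alpha, beta, heads, tails):
    """Return posterior Beta parameters after observing new coin flips."""
    return alpha + heads, beta + tails

def posterior_mean(alpha, beta):
    """Mean of a Beta(alpha, beta) distribution, i.e. the updated point estimate."""
    return alpha / (alpha + beta)

# Existing belief: the coin is roughly fair -> Beta(5, 5), prior mean 0.5.
alpha, beta = 5.0, 5.0

# New information: 8 heads and 2 tails are observed.
alpha, beta = update_belief(alpha, beta, heads=8, tails=2)

print(posterior_mean(alpha, beta))  # 0.65: the belief shifts toward "heads-biased"

The example shows the qualitative point made above: the posterior is a compromise between the prior belief (a fair coin) and the new evidence (mostly heads), with the amount of change governed by how much data arrives relative to the strength of the prior.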
Statistically speaking, Bayesian methodology is based on conditional probabilities.
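The conditional-probability relationship at the core of the methodology is Bayes' theorem: for a hypothesis H and observed data D,

\[
P(H \mid D) = \frac{P(D \mid H)\, P(H)}{P(D)},
\]

where P(H) is the prior belief in the hypothesis, P(D | H) is the likelihood of the data under that hypothesis, P(D) is the overall probability of the data, and P(H | D) is the posterior belief after the data have been taken into account.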