# Understanding Bayes' Theorem

Bayes’ Theorem is perhaps the most important theorem in mathematical statistics and probability theory, which is why it appears so often in data science.

Bayes’ theorem, named after the 18th-century British mathematician Thomas Bayes, is a mathematical formula for determining conditional probabilities. One of its many applications is Bayesian inference, a particular approach to statistical inference in which Bayes’ theorem is used to update the probability of a hypothesis as more evidence or information becomes available.

Bayes’ Theorem expresses the posterior probability P(H | E) as the likelihood P(E | H) multiplied by the prior probability of the hypothesis P(H), divided by the probability of the evidence P(E):

P(H | E) = P(E | H) × P(H) / P(E)
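The formula can be sketched as a small function; a minimal illustration, not part of any particular library:

```python
def posterior(likelihood, prior, evidence):
    """Apply Bayes' theorem: P(H | E) = P(E | H) * P(H) / P(E).

    likelihood -- P(E | H), probability of the evidence given the hypothesis
    prior      -- P(H), probability of the hypothesis before seeing evidence
    evidence   -- P(E), overall probability of the evidence
    """
    return likelihood * prior / evidence


# Example: likelihood 0.9, prior 0.5, evidence probability 0.6
print(posterior(0.9, 0.5, 0.6))  # 0.75
```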

Let us now understand each term of the Bayes’ Theorem formula in detail –

• P(H | E) – This is referred to as the posterior probability: what we believe about the hypothesis after seeing the evidence. It denotes the conditional probability of the hypothesis H, given the evidence E.
• P(E | H) – This component denotes the likelihood. It is the conditional probability of observing the evidence, given that the assumed hypothesis holds true.
• P(H) – This is referred to as the prior probability. It denotes the probability of the hypothesis H being true before the evidence is taken into account.
• P(E) – This is the probability of the occurrence of the evidence regardless of the hypothesis. It acts as a normalizing constant and, for a hypothesis and its complement, can be computed via the law of total probability: P(E) = P(E | H) P(H) + P(E | ¬H) P(¬H).