What Is Bayes' Theorem?
Bayes' theorem is the mathematical rule for updating beliefs in the light of new evidence. Formulated by Reverend Thomas Bayes and published posthumously in 1763, it provides a precise formula: the posterior probability of a hypothesis H given evidence E equals the likelihood of observing E if H is true, multiplied by the prior probability of H, divided by the total probability of observing E.
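The formula can be sketched as a small function. This is an illustrative implementation, not code from the simulator; P(E) is expanded with the law of total probability so only the prior and the two conditional likelihoods are needed:

```python
def posterior(p_h, p_e_given_h, p_e_given_not_h):
    """Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E).

    P(E) is computed via the law of total probability:
    P(E) = P(E|H)P(H) + P(E|~H)P(~H).
    """
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
    return p_e_given_h * p_h / p_e
```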
The Base Rate Fallacy
One of the most important cognitive biases revealed by Bayes' theorem is the base rate fallacy — our tendency to ignore how common or rare something is when interpreting evidence. A medical test with 95% accuracy (correctly classifying 95% of both sick and healthy people) sounds highly reliable, but if the disease affects only 1% of the population, a positive result still has roughly an 84% chance of being a false positive. This counterintuitive result has profound implications for medical screening, criminal justice, and any domain where we test for rare events.
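The 84% figure can be verified with a few lines of arithmetic. This sketch assumes "95% accuracy" means both sensitivity and specificity are 95%:

```python
prevalence = 0.01     # 1% of the population has the disease
sensitivity = 0.95    # P(positive | disease)
specificity = 0.95    # P(negative | healthy)

# Total probability of a positive test: true positives + false positives
p_pos = prevalence * sensitivity + (1 - prevalence) * (1 - specificity)

# Posterior probability of disease given a positive test
p_disease_given_pos = prevalence * sensitivity / p_pos

# Chance that a positive result is a false alarm
p_false_positive = 1 - p_disease_given_pos
print(round(p_false_positive, 2))  # ≈ 0.84
```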
Natural Frequencies
The probability tree and dot display in this simulator use natural frequencies — showing counts out of 1000 rather than abstract percentages. Research by Gerd Gigerenzer demonstrates that people understand Bayesian reasoning far better when information is presented this way. Instead of juggling conditional probabilities, you can simply count: out of 1000 people, about 10 have the disease, roughly 9.5 test positive (true positives), and about 50 healthy people also test positive (false positives). So only 9.5 out of ~60 positive tests are genuine.
Interactive Exploration
Use the sliders to see how the posterior changes with different priors, sensitivities, and specificities. Notice how dramatically the posterior drops when the prior (base rate) is very low — this is the mathematical basis of the base rate fallacy. The bar chart at the bottom shows the magnitude of the Bayesian update: how much a single piece of evidence shifts your beliefs from prior to posterior.
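The effect of the prior can be seen by sweeping it while holding the test quality fixed. A small sketch (sensitivity and specificity held at 95%, values chosen for illustration):

```python
def posterior(prior, sensitivity=0.95, specificity=0.95):
    # P(disease | positive) via Bayes' theorem
    p_pos = prior * sensitivity + (1 - prior) * (1 - specificity)
    return prior * sensitivity / p_pos

# The posterior collapses as the base rate shrinks
for prior in (0.5, 0.1, 0.01, 0.001):
    print(f"prior={prior:<6} posterior={posterior(prior):.3f}")
```

With a 50% prior the test is nearly decisive (posterior 0.95), but at a 0.1% prior the same positive result leaves under a 2% chance of disease — the base rate fallacy in numbers.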