The Bayesian Sampler: Generic Bayesian Inference Causes Incoherence in Human Probability Judgments
Dr. Adam Sanborn
University of Warwick
Human probability judgments are systematically biased, in apparent tension with Bayesian models of cognition. Perhaps, however, the brain does not represent probabilities explicitly, but instead approximates probabilistic calculations through a process of sampling, as used in computational probabilistic models in statistics. Naïve probability estimates can be obtained by calculating the relative frequency of an event within a sample, but these estimates tend to be extreme when the sample size is small. We propose instead that people use a generic prior to improve the accuracy of their probability estimates based on samples, and we call this model the Bayesian sampler. The Bayesian sampler trades off the coherence of probabilistic judgments for improved accuracy, and provides a single framework for explaining phenomena associated with diverse biases and heuristics, such as conservatism and the conjunction fallacy. The approach turns out to provide a rational reinterpretation of “noise” in an important recent model of probability judgment, the probability theory plus noise model, making equivalent average predictions for simple events, conjunctions, and disjunctions. The Bayesian sampler does, however, make distinct predictions for conditional probabilities, and we show in a new experiment that this model better captures these judgments both qualitatively and quantitatively.
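The contrast between naïve frequency estimates and prior-regularized estimates can be sketched in a few lines of Python. This is an illustrative sketch only, not the paper's implementation: it assumes independent Bernoulli mental samples and a symmetric Beta(β, β) generic prior, under which the regularized estimate is (hits + β) / (n + 2β); the function names and sampling details are hypothetical.

```python
import random

def naive_estimate(p_true, n_samples, rng):
    # Relative frequency of the event within a small mental sample.
    # With few samples, extreme estimates (0 or 1) are common.
    hits = sum(rng.random() < p_true for _ in range(n_samples))
    return hits / n_samples

def bayesian_sampler_estimate(p_true, n_samples, beta, rng):
    # Regularize the sample frequency with a symmetric Beta(beta, beta)
    # prior, pulling estimates away from the extremes of 0 and 1
    # at the cost of some coherence across related judgments.
    hits = sum(rng.random() < p_true for _ in range(n_samples))
    return (hits + beta) / (n_samples + 2 * beta)
```

Under these assumptions, the naïve estimate is unbiased on average, while the regularized estimate is conservatively shrunk toward 1/2; for example, with n = 5 samples and β = 1, a true probability of 0.9 yields an average judgment of (5 × 0.9 + 1) / 7 ≈ 0.79.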
Thursday, September 26, 4:15-5:30 PM
Greene Science Center 9th Floor Lecture Hall
All attendees must register using the signup link below to gain access to the Jerome L. Greene Science Center.