Recent Award

Revamped Bayesian Inference

This research project will make Bayesian statistical computation much faster. Bayesian methods have not gained much traction in the social sciences, in part because the approach is computationally intensive: many researchers who could usefully apply these techniques choose not to because the analysis is too costly. This project will improve the computational efficiency of Bayesian methods by harnessing a critical theorem that has long been overlooked by statisticians, despite having been proven by one of the twentieth century's greatest mathematicians. Several pieces must be put into place for this approach to work on today's computers; once they are, and with a modest amount of additional training, scientists will be able to apply state-of-the-art statistical methods regardless of the amount of data. The beauty of the theorem underlying the modified calculation is that it is almost universally applicable and can be leveraged by scientists in any field. By lowering the computational cost of Bayesian analysis, and by making that cost nearly independent of data size, this project will have a broad and deep impact in the social sciences and elsewhere, facilitating the analysis of the large data sets that have recently become prevalent across scientific fields. Graduate students will help conduct the project and will be trained in the use of this approach, and the investigators will implement their findings in an existing free and open-source software program.

This research project will leverage the Kolmogorov Superposition Theorem (KST) to speed up the computations required by Bayesian methods. Most statistical models of scientific phenomena ask: What was the probability, under the model, of observing this collection of data, and how would that probability change with the values of the unknown quantities to be estimated? To answer those questions, computers calculate that probability for many possible values of the unknowns and determine which ranges of the estimates are more probable than others. Each observation in a data set affects this probability, so when data sets are large the calculation is slow and often infeasible. The KST, however, demonstrates that there is an alternative way to perform the calculation exactly, using only additions of mathematical functions that each take in a single unknown and output one link in a chain. The number of links in the alternative chain depends only on the number of unknowns, not on the number of observations in the data set, so the calculation can be dramatically accelerated for large data sets. To use this technique, scientists will need to think somewhat differently about how they build models and estimate the models' unknown quantities, but the investigators will provide a coherent theoretical framework and open-source software tools that will make this process not only faster but simpler.
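The "chain" described above can be made precise. In its classic formulation (the project may rely on a refined variant), the Kolmogorov Superposition Theorem states that any continuous function of n variables on the unit cube can be written exactly as a superposition of sums of single-variable functions:

```latex
f(x_1, \ldots, x_n) \;=\; \sum_{q=0}^{2n} \Phi_q\!\left( \sum_{p=1}^{n} \phi_{q,p}(x_p) \right)
```

where each outer function \(\Phi_q\) and each inner function \(\phi_{q,p}\) is a continuous function of a single variable, and the inner functions can be chosen independently of \(f\). The representation uses \(2n+1\) outer terms and \(n(2n+1)\) inner functions, a count that depends only on the number of variables \(n\), never on the number of observations in a data set.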

Principal Investigators: 

Benjamin Goodrich

Associate Research Scholar; Lecturer in the Department of Political Science

Andrew Gelman

Higgins Professor of Statistics and Professor of Political Science

September 1, 2021 to August 31, 2024
