Bayesian methods for meta-analysis in the presence of publication bias

Agency: National Science Foundation
Panel: Methods, Measurement, and Statistics
Location: University of California, Irvine
Start date: September 2015
End date: August 2018
Budget: $260,000.00
Agency code: 1534472
Lead: Joachim Vandekerckhove
Project text: pn-1534472.pdf

Abstract

The differential rate of publishing between positive and negative results, known as publication bias, is of increasing concern in the social and behavioral sciences. This research project will develop a new approach to meta-analysis that explicitly takes into account the possibility of a biased publication process. Meta-analysis has been instrumental in interpreting the claims made in the academic literature. However, academic journals, especially in the social and behavioral sciences, seem to strongly prefer manuscripts that posit the existence of an effect over manuscripts that report non-significant outcomes. This hinders classical meta-analysis methods because the aggregate of a biased set of empirical results will itself be biased. The new approach will allow for better aggregation of published results and will provide a more accurate view of the effects of various experimental manipulations and treatments. Software that implements this approach for a variety of situations will be developed and published.

This research project will develop a new approach to meta-analysis called "statistical mitigation" that combines behavioral models with state-of-the-art statistical methods. The approach will be based on a Bayesian model averaging technique in which effect size estimates are computed under a set of plausible selection models and then averaged across those models. With this approach, it will be possible to isolate the signal of true effects within the noise of measurement error. The investigator will test the method under various circumstances, compare the new approach to existing methods for inference in the presence of publication bias, and perform simulations to assess the efficiency of the method. With a single approach to meta-analysis, researchers will be able to account for the possibility of publication bias, confirm or disconfirm null and non-null hypotheses, and estimate effect sizes.
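The core idea of averaging effect size estimates over a set of selection models can be illustrated with a minimal sketch. The code below is not the project's software; it assumes illustrative (made-up) study data, a single truncation-style selection model (only significant results are published), a Normal(0, 1) prior on the true effect, and grid integration in place of a full MCMC treatment:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical published effect estimates and standard errors (illustrative
# data, not from the project). All z = y/se exceed 1.96, a pattern consistent
# with selective publication of significant results.
y  = np.array([0.45, 0.60, 0.38, 0.52, 0.70])
se = np.array([0.20, 0.25, 0.15, 0.22, 0.30])

grid  = np.linspace(-2, 2, 2001)     # grid over the true effect size delta
prior = norm.pdf(grid, 0, 1)         # delta ~ Normal(0, 1) prior
prior /= prior.sum()                 # normalize the prior on the grid

def loglik_no_selection(delta):
    # Model 1: every study is published regardless of its outcome.
    return norm.logpdf(y[:, None], delta[None, :], se[:, None]).sum(axis=0)

def loglik_selection(delta):
    # Model 2: a study appears only if y/se > 1.96, so each published
    # estimate follows a truncated normal: density divided by P(publish).
    logdens = norm.logpdf(y[:, None], delta[None, :], se[:, None])
    logpub  = norm.logsf(1.96 * se[:, None], delta[None, :], se[:, None])
    return (logdens - logpub).sum(axis=0)

results = {}
for name, loglik in [("no-selection", loglik_no_selection),
                     ("selection", loglik_selection)]:
    w = np.exp(loglik(grid)) * prior          # unnormalized posterior
    marglik = w.sum()                         # grid approx. of p(data | model)
    post_mean = (grid * w).sum() / marglik    # posterior mean of delta
    results[name] = (marglik, post_mean)

# Posterior model probabilities (equal prior odds) weight the estimates.
total = sum(m for m, _ in results.values())
averaged = sum((m / total) * est for m, est in results.values())
```

The selection model discounts the fact that only large estimates survived publication, so its posterior mean for the effect is smaller than the naive estimate; the model-averaged value falls between the two, weighted by how well each selection model accounts for the data.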

Publications

Dutilh, G., Annis, J., Brown, S., Cassey, P., Evans, N., Grasman, R., Hawkins, G., Heathcote, A., Holmes, W., Krypotos, A., Kupitz, C., Leite, F., Lerche, V., Lin, Y., Logan, G., Palmeri, T., Starns, J., Trueblood, J., van Maanen, L., van Ravenzwaaij, D., Vandekerckhove, J., Visser, I., Voss, A., White, C., Wiecki, T., Rieskamp, J., & Donkin, C. (2019). The quality of response time data inference: A blinded, collaborative assessment of the validity of cognitive models. Psychonomic Bulletin & Review, 26, 1051–1069.
Mistry, P., Pothos, E., Vandekerckhove, J., & Trueblood, J. (2018). A quantum probability account of individual differences in causal reasoning. Journal of Mathematical Psychology, 87, 76–97.
Zwaan, R., Etz, A., Lucas, R., & Donnellan, M. (2018). Improving social and behavioral science by making replication mainstream: A response to commentaries. Behavioral and Brain Sciences, 41, e157.
Etz, A., & Vandekerckhove, J. (2018). Introduction to Bayesian inference for psychology. Psychonomic Bulletin & Review, 25, 5–34.
Matzke, D., Boehm, U., & Vandekerckhove, J. (2018). Bayesian Inference in Psychology, Part III: Bayesian parameter estimation in nonstandard models. Psychonomic Bulletin & Review, 25, 77–101.
Baribault, B., Donkin, C., Little, D., Trueblood, J., Oravecz, Z., van Ravenzwaaij, D., White, C., De Boeck, P., & Vandekerckhove, J. (2018). Metastudies for robust tests of theory. Proceedings of the National Academy of Sciences, 115, 2607–2612.
Vandekerckhove, J., Rouder, J., & Kruschke, J. (2018). Editorial: Bayesian methods for advancing psychological science. Psychonomic Bulletin & Review, 25, 1–4.
Etz, A., Haaf, J., Rouder, J., & Vandekerckhove, J. (2018). Bayesian inference and testing any hypothesis you can specify. Advances in Methods and Practices in Psychological Science, 1, 281–295.
Zwaan, R., Etz, A., Lucas, R., & Donnellan, M. (2018). Making replication mainstream. Behavioral and Brain Sciences, 41, e120.
Ly, A., Raj, A., Etz, A., Marsman, M., & Wagenmakers, E. (2018). Bayesian reanalyses from summary statistics: A guide for academic consumers. Advances in Methods and Practices in Psychological Science, 1, 367–374.
Etz, A. (2018). Introduction to the concept of likelihood and its applications. Advances in Methods and Practices in Psychological Science, 1, 60–69.
Etz, A., Gronau, Q., Dablander, F., Edelsbrunner, P., & Baribault, B. (2018). How to become a Bayesian in eight easy steps: An annotated reading list. Psychonomic Bulletin & Review, 25, 219–234.
Ly, A., Etz, A., Marsman, M., & Wagenmakers, E. (2018). Replication Bayes factors from evidence updating. Behavior Research Methods, 51, 2498–2508.
Etz, A., & Wagenmakers, E. (2017). J. B. S. Haldane's contribution to the Bayes factor hypothesis test. Statistical Science, 32, 313–329.
Lakens, D., & Etz, A. (2017). Too true to be bad: When sets of studies with significant and non-significant findings are probably true. Social Psychological and Personality Science, 8, 875–881.