Weighting Competing Models

Draft available upon request
(joint with F.H. Schneider)

We study belief updating in the presence of competing data-generating processes, or models, that could explain observed data. Using a theory-driven experiment, we identify the weights individuals assign to different models. We find that the most prevalent updating rule assigns full weight to the model that best fits the data, reflecting overinference about the models. While some participants assign positive weights to multiple models—consistent with Bayesian updating—they often do so in a biased manner, reflecting underinference about the models. We also find that individuals consistently apply these distinct updating rules across updating tasks. Finally, we show that biases in model weighting often lead participants to become more certain about a state regardless of the observed data, violating a core property of Bayesian updating. These findings offer an empirical basis for the theoretical literature that formalizes narratives as models.
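For reference, a minimal sketch of the Bayesian benchmark for weighting competing models (the notation below is illustrative and not taken from the paper): with candidate models $m$, prior model weights $\pi(m)$, a payoff-relevant state $\theta$, and observed data $d$, the Bayesian posterior mixes the model-conditional posteriors using weights determined by how well each model fits the data,

\[
P(\theta \mid d) \;=\; \sum_{m} P(\theta \mid d, m)\,
\underbrace{\frac{P(d \mid m)\,\pi(m)}{\sum_{m'} P(d \mid m')\,\pi(m')}}_{\text{posterior weight on model } m}.
\]

In this notation, assigning full weight to the best-fitting model amounts to replacing the posterior model weights with an indicator on the model with the highest likelihood $P(d \mid m)$ (overinference), whereas underinference corresponds to posterior model weights that stay too close to the priors $\pi(m)$.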