We study how individuals update their beliefs in the presence of competing data-generating processes, or models, that could explain observed data. Through experiments, we identify the weights participants assign to different models and find that the most common updating rule gives full weight to the model that best fits the data. While some participants assign positive weights to multiple models—consistent with Bayesian updating—they often do so in a systematically biased manner. Moreover, these biases in model weighting frequently lead participants to become more certain about a state regardless of the data, violating a core property of Bayesian updating.
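As a rough illustration of the Bayesian benchmark invoked above (the notation here is ours, introduced for exposition rather than taken from the body of the paper): suppose the observed data $d$ could have been generated under one of several models $m \in \{1,\dots,K\}$, each with prior weight $\pi(m)$ and a likelihood $p(d \mid \theta, m)$ over the payoff-relevant state $\theta$. A Bayesian forms posterior beliefs by averaging across models, weighting each model by how well it fits the data:
\[
p(\theta \mid d) \;=\; \sum_{m=1}^{K} w(m \mid d)\, p(\theta \mid d, m),
\qquad
w(m \mid d) \;=\; \frac{p(d \mid m)\,\pi(m)}{\sum_{m'=1}^{K} p(d \mid m')\,\pi(m')},
\]
where $p(d \mid m) = \int p(d \mid \theta, m)\, p(\theta \mid m)\, d\theta$ is model $m$'s marginal likelihood. In this notation, the most common rule we document corresponds to setting $w(m \mid d) = 1$ for the best-fitting model $\arg\max_{m'} p(d \mid m')$ and zero for all others, rather than using the posterior weights above.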