This project aims to understand how people update beliefs in the presence of competing models. It is common to face situations in which several different models could explain the same data. How do people learn from data in these situations? Do they select a single model and update beliefs through it alone, or do they form posteriors by weighting the predictions of the competing models? Our experimental design allows us to isolate the weights subjects place on different models and to compare them with the theoretical predictions. We find that a substantial share of people adopt only the model that explains the data best, in line with maximum likelihood selection. Bayesian updating, by contrast, requires weighting the models: we find that many subjects do weight them, but with distorted weights relative to the Bayesian benchmark. Shedding light on this question is essential to uncover whether people can keep multiple models in mind and avoid the extreme conclusions that follow from adopting a single model.
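For concreteness, the two benchmarks contrasted above can be written compactly. The notation below (models m_k, data d, payoff-relevant state theta) is illustrative and not taken from the paper; it is a minimal sketch of the standard formulations.

\[
  P(\theta \mid d) \;=\; \sum_{k} \underbrace{P(\theta \mid d, m_k)}_{\text{prediction of model } m_k} \, \underbrace{P(m_k \mid d)}_{\text{posterior model weight}}
  \qquad \text{(Bayesian benchmark: average over models)}
\]

\[
  P^{\mathrm{ML}}(\theta \mid d) \;=\; P(\theta \mid d, m^{*}),
  \qquad m^{*} \;=\; \arg\max_{k} P(d \mid m_{k})
  \qquad \text{(maximum likelihood selection: adopt the best-fitting model)}
\]

Under this reading, the experiment's estimated weights on each model can be compared with the Bayesian weights \(P(m_k \mid d)\) and with the degenerate weights (all mass on \(m^{*}\)) implied by maximum likelihood selection.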