I study the problem of persuading a boundedly rational agent without controlling or knowing the piece of information she observes. Persuasion occurs through the provision of models: the persuader communicates ways of interpreting observable signals. The key assumption is that the agent adopts the model that best fits what she observes, given her initial beliefs, and takes the action that maximizes her expected utility under the adopted model. I characterize the extent to which beliefs can be manipulated in this setting and show that the agent may hold inconsistent beliefs across signal realizations (posterior beliefs across realizations do not average to the prior) because each signal may trigger the adoption of a different model. While persuasion can mislead the agent, the extent of her vulnerability to it is driven by her initial beliefs. Polarization is inevitable when agents with sufficiently different priors are exposed to the same conflicting models. I apply this framework to political polarization, conflicts of interest in finance, lobbying, and self-persuasion.
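The adoption rule described above can be illustrated with a minimal numerical sketch. All numbers here are hypothetical and chosen only for illustration (binary state, two competing models, and an assumed prior of 0.3); they do not come from the paper. The agent adopts, for each observed signal, the model that assigns the observed signal the highest probability under her prior, then updates by Bayes' rule within that model. Because different signals lead to different adopted models, both posteriors can land on the same side of the prior, so no weighting of them averages back to it:

```python
# Hypothetical setup: state θ ∈ {0, 1}, prior P(θ=1) = 0.3,
# signals 'a' and 'b', and two models that give opposite
# interpretations of each signal. Likelihoods are (P(s|θ=0), P(s|θ=1)).
prior = 0.3

models = {
    "m1": {"a": (0.9, 0.2), "b": (0.1, 0.8)},
    "m2": {"a": (0.2, 0.9), "b": (0.8, 0.1)},
}

def marginal(model, s):
    """Probability the model assigns to signal s under the agent's prior."""
    p0, p1 = models[model][s]
    return (1 - prior) * p0 + prior * p1

def posterior(model, s):
    """Bayesian posterior P(θ=1 | s) computed within the given model."""
    p0, p1 = models[model][s]
    return prior * p1 / ((1 - prior) * p0 + prior * p1)

adopted, posteriors = {}, {}
for s in ("a", "b"):
    # Adoption rule: pick the model that best fits the observed signal.
    adopted[s] = max(models, key=lambda m: marginal(m, s))
    posteriors[s] = posterior(adopted[s], s)

print(adopted)     # a different model is adopted for each signal
print(posteriors)  # both posteriors fall below the prior of 0.3
```

With these numbers the agent adopts `m1` after signal `a` and `m2` after signal `b`, and both resulting posteriors on θ=1 lie below 0.3, so her beliefs are inconsistent across realizations in exactly the sense stated above.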