In natural settings, we make decisions based on streams of partial and noisy information. Arguably, we summarize the perceived information into a probabilistic model of the world, which we can exploit to make decisions. This talk will explore such ‘mental models’ in the context of idealized tasks that can be carried out in the laboratory and modeled quantitatively. The starting point of the talk will be a sequential inference task that probes human inference in changing environments. I will describe the task and an experimental finding, namely, that humans make use of fine differences in temporal statistics when making inferences. While our observations agree qualitatively with an optimal inference model, the data exhibit biases. What is more, human responses, unlike those of the optimal model, are variable, and this behavioral variability is itself modulated during the inference task. In order to uncover the putative algorithmic framework employed by humans, I will go on to examine a family of models that break away from the optimal model in diverse ways. This investigation will suggest a picture in which humans carry out inference using noisy mental representations. More specifically, rather than representing a whole probability function, human subjects may manipulate probabilities using a (possibly modest) number of samples. The approach just outlined illustrates a range of possible computational structures of sub-optimal inference, but it lacks the appeal of a normative framework. If time permits, I will discuss recent ideas on a normative approach to human inference subject to internal ‘costs’ or ‘drives’, which can explain various biases. While different in its formulation, this approach shares conceptual commonalities with the theory of rational inattention and other constrained-optimization frameworks in cognitive science.