The capacity for cognitive control, one of the defining characteristics of human cognition, is also remarkably limited. Typically, people cannot engage in more than a few control-demanding tasks at once, and sometimes only a single one. Limited capacity was a defining element in the earliest conceptualizations of cognitive control; it remains one of the most widely accepted axioms of cognitive psychology, and is even the basis for some laws (e.g., those prohibiting the use of mobile devices while driving). It also plays a central role in normative models of cognitive control (e.g., those based on "bounded rationality"), which assume that the capacity limitation imposes an opportunity cost on the allocation of control, and that control policies are chosen so as to optimize payoff relative to this cost (e.g., the Expected Value of Control theory). Remarkably, however, the reason that the capacity for control is limited remains a mystery. Structural and/or metabolic constraints are commonly, if tacitly, assumed to be the reason. However, these seem unlikely, given the vast resources available to the human brain. In this talk, I will present an alternative account that offers a computational explanation for the capacity constraints on cognitive control. This account suggests that constraints on controlled processing reflect an inherent tradeoff between a bias in learning toward the development of efficient, generalizable representations, and the performance efficiency afforded by dedicated representations that support parallel processing. I will describe theoretical results (involving simulations and analysis) in support of these ideas, as well as the beginnings of an empirical line of research designed to test them.
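The tradeoff named above can be sketched in toy form. The functions and numbers below are a hypothetical illustration of the general idea (shared representations cause crosstalk when tasks are performed in parallel, while dedicated representations do not), not the speaker's actual model:

```python
# Hypothetical toy sketch (an illustration of the tradeoff, not the
# speaker's model): two tasks, each mapping one input feature to one
# output. Dedicated representations give each task its own pathway;
# a shared representation compresses both tasks through a single
# hidden unit, which is more compact but produces crosstalk when
# both tasks are attempted at once.

def dedicated(x1, x2):
    # Each task has its own hidden unit: no interference between tasks.
    h1, h2 = x1, x2
    return h1, h2

def shared(x1, x2):
    # Both tasks route through one shared hidden unit: each output
    # now mixes both inputs.
    h = x1 + x2
    return h, h

print(dedicated(1.0, 0.5))  # (1.0, 0.5): each output tracks its own task
print(shared(1.0, 0.5))     # (1.5, 1.5): crosstalk under multitasking
```

On this caricature, the shared pathway uses fewer hidden units (one instead of two), hinting at why learning might favor such representations even though they preclude accurate parallel performance.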