Rewards induce learning (positive reinforcement), approach behavior, economic decisions and positive emotions (pleasure, desire). We investigate basic neuronal reward signals during learning and decision-making, using behavioral and neurophysiological methods. Certainty equivalents derived from behavioral choices suggest that monkeys are risk seeking for small rewards, risk neutral for intermediate rewards and risk averse for larger rewards (juice volumes of 0.05-1.2 ml). The animals' choices are meaningful in that they satisfy first-, second- and third-order stochastic dominance. The reward prediction error signal of dopamine neurons codes subjective value as a common currency derived from different rewards, temporal discounting and risk. Neuronal satisfaction of first- and second-order stochastic dominance indicates meaningful processing of value under risk and incorporation of risk into subjective value. Assessment of economic utility goes one step further and determines subjective value as a mathematical function of objective value (liquid reward volume). Our monkeys show utility functions that are initially convex and then concave with increasing rewards, confirming the risk attitudes derived from certainty equivalents in direct choices. The dopamine signal follows this nonlinear utility function closely and thus codes a prediction error in economic utility. These data unite concepts from animal learning theory (prediction error) and economic decision theory (utility) at the level of single reward neurons.
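The link between a convex-then-concave utility function, risk attitude, and a utility prediction error can be sketched numerically. The logistic utility function below and all of its parameter values (inflection point, slope, the 0.1/0.5 ml gamble) are illustrative assumptions for exposition, not the authors' fitted model or data:

```python
import math

def utility(ml, inflection=0.4, slope=10.0):
    """Assumed S-shaped utility over juice volume (ml): convex for small
    rewards, concave for large rewards, normalized so u(0)=0 and u(1.2)=1."""
    raw = lambda x: 1.0 / (1.0 + math.exp(-slope * (x - inflection)))
    return (raw(ml) - raw(0.0)) / (raw(1.2) - raw(0.0))

def utility_prediction_error(received_ml, predicted_ml):
    """Dopamine-like prediction error expressed in utility, not in
    physical reward volume: delta = u(received) - u(predicted)."""
    return utility(received_ml) - utility(predicted_ml)

# A 50/50 gamble between 0.1 ml and 0.5 ml lies mostly in the convex
# (small-reward) region, so expected utility exceeds the utility of the
# gamble's expected value -- the signature of risk seeking.
ev = 0.5 * 0.1 + 0.5 * 0.5                      # expected value: 0.3 ml
eu = 0.5 * utility(0.1) + 0.5 * utility(0.5)    # expected utility
print(eu > utility(ev))                          # risk seeking here
```

Because the prediction error is computed on the nonlinear utility scale rather than on reward volume, equal increments in juice produce unequal error signals, which is the property the abstract attributes to the dopamine response.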