The term “automated decision making” refers to many kinds of systems that vary in how they are designed and used. The normative questions raised by automated decision making—indeed, the very legitimacy of automation—depend critically on these details. In this paper, we focus on one prominent category of automated decision making with the aim of sharpening the policy debate. Specifically, we analyze systems in which (1) machine learning is employed to automate the development of a decision-making rule, which is (2) then applied to individuals to (3) make predictions about some future outcome. We call this form of algorithmic decision making “predictive optimization” because the decision-making rules at issue have been explicitly optimized, via automation, with the narrow goal of maximizing the accuracy with which they predict a future outcome. An example of predictive optimization is making college admissions decisions by using historical data to predict which applicants are most likely to succeed according to some measure, such as GPA or job placement. Predictive optimization contrasts with efforts to simply automate the judgments of human decision makers, such as admissions officers. It is also distinct from manual approaches to developing admissions policies, which may involve a more deliberative process that can incorporate a range of considerations and goals.

We focus on predictive optimization because it introduces a distinct set of concerns that warrant separate and careful treatment. One of the main contributions of our paper is to trace the provenance of these concerns back to the canonical model development process used in predictive optimization. First, it can be very difficult, and sometimes impossible, to develop a decision-making process based on predictive optimization that even meets its stated goals. It is rarely obvious how to make a problem amenable to predictive optimization.
The developer needs to decide what should be predicted, how this prediction should inform a decision, and how this decision helps advance the goals of the decision maker. These challenges are rarely addressed effectively. In addition, there can be serious barriers to assembling the data necessary to make accurate and reliable predictions, ensuring equally accurate predictions across the population, and achieving acceptable levels of accuracy given fundamental limits to prediction. The adoption and use of predictive optimization can also raise additional concerns around contestability: the rules induced from data can defy meaningful inspection, limiting people’s ability to challenge them.

Our list of critiques is deliberately limited, scoped to enable us to challenge these systems on their own terms (for instance, we avoid systemic critiques beyond the purview of an individual decision maker). Through this lens, we critically examine developers' claims, identify fundamental limitations of the predictive paradigm that design changes cannot address, and tie these limitations to the risk of harm to individuals. As a result, we hope that our objections are legible to the technologists and managers who develop and deploy these systems.

To support our arguments and provide concrete examples, we assembled a large corpus of news articles, research papers, and datasets, identifying 57 applications of predictive optimization to real-world problems. Of these, we selected eight particularly consequential applications—predicting pre-trial risk, child maltreatment, job performance, school dropout, loan default, self-harm, morbidity, and risk of insurance lapse—and we show that the specific challenges described above recur across each of them. Our findings serve as a toolkit for scholars and advocates to challenge unwarranted claims and to determine whether it is reasonable to even attempt predictive optimization for a given problem.
To aid this task, we provide a checklist of around 30 questions that can help challenge the legitimacy of a predictive optimization application. Our findings also help explain past controversies and failures—why systems based on predictive optimization were deployed in various real-world domains but were subsequently contested and often abandoned. Ultimately, we argue that predictive optimization applications should be presumed illegitimate unless affirmatively shown otherwise, and we lay out the specific hurdles that such a demonstration must clear.