The term ‘algorithmic governance’ has two meanings in the policy debate and the existing literature: (1) the role algorithms play in governing our lives, and (2) the governance of algorithms themselves. Algorithms govern our lives both explicitly and implicitly. Explicitly, the public sector and governance institutions use algorithms, often outsourced to contractors, to fulfill their missions, for example in allocating policing resources and determining prison sentences. Implicitly, algorithms shape many aspects of our lives: the contours of discourse in our digital public sphere, qualifications for loans, insurance premiums, employment eligibility, movie recommendations, the routes we drive to work, airline fares, college admissions, and more. With our lives increasingly mediated by digital platforms such as social media and search engines, these opaque private-sector algorithms may govern us more than the laws of our governments do (MacKinnon 2013). The harms of these governing algorithms in both public-sector and private-sector use are well documented (e.g., O’Neil 2016; Noble 2018). These algorithms, lacking transparency and legitimacy, sometimes violate existing laws. We must therefore work out how to extend governance so that laws are upheld when algorithms are used, that is, the second meaning of algorithmic governance.

This paper aims to improve our understanding of how to govern algorithms by exploring the incentives and effects of different policies on how algorithms and AI are developed and deployed. Several countries have begun to regulate algorithms and AI, but these laws are still in a fledgling state and may have unintended consequences. Such policies face critical challenges, including asymmetries of information and expertise between regulators and firms, the dynamic nature of algorithms and their uses, and the sheer volume of systems to be overseen. While a great deal of helpful legal scholarship has explored this issue, this paper provides the economic analysis needed to clarify the incentives each policy option creates for the different parties. We evaluate recent and proposed policies on several criteria, including (1) incentives for firms and other entities to comply, (2) effectiveness in governance (including monitoring and enforcement), (3) minimization of obstacles to innovation (for potential entrants as well as incumbents), and (4) the balance between Type I and Type II errors.

We use game-theoretic modeling of interactions between platforms and regulators, between platforms and users, between algorithm developers and implementers, and among multiple users to identify the incentives and outcomes that arise under different proposed governance structures and in different contexts (a stylized example of this approach appears at the end of this section). We test the results of these models against empirical evidence from trials of such policies where they exist, and from similar regulation of other industries where they do not. We evaluate several major proposed policies, including aspects of the EU Artificial Intelligence Act, the US Algorithmic Accountability Act, and the proposed FDA-style model of pre-market testing and evaluation. We also introduce and evaluate an alternative model based on the accounting audits of corporate financial statements for legal compliance. The paper contributes a better, economically grounded understanding of the tradeoffs among these regulatory policy options.
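To make the game-theoretic approach concrete, consider a minimal inspection game between a regulator and a platform. This is a standard textbook illustration, not one of the models developed in the paper, and the parameters are illustrative assumptions: compliance costs the platform $c > 0$, an audit costs the regulator $k > 0$, a detected violation triggers a fine $f > c$ on the platform, and catching a violation yields the regulator an enforcement benefit $b > k$, with all other outcomes yielding zero. No pure strategy profile is an equilibrium, so each side randomizes until the other is indifferent:
\[
  p^{*} = \frac{k}{b}, \qquad q^{*} = \frac{c}{f},
\]
where $p^{*}$ is the equilibrium probability that the platform violates and $q^{*}$ the probability that the regulator audits. Even this minimal sketch yields a policy-relevant comparative static: raising the fine $f$ does not reduce violations in equilibrium (it only reduces auditing), whereas lowering the regulator's audit cost $k$ or raising its enforcement payoff $b$ does.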