Welcome back to the Economics of AI. In doing research, it is often useful to take a step back and look at the bigger picture. Our topic in this module is to take such a step back and look at some of the potential implications of transformative AI for humanity. What we will cover is speculative. Many developments in AI will turn out quite differently from what is currently predicted, or what is currently expected by the most sophisticated technologists, but it is still useful to look at the longer-term concerns that we are going to cover. In fact, we would be shirking our job as economists if we didn't. Let me give you two important reasons why.

First, as an insurance strategy, we want to prepare for the potentially bad outcomes in which our social marginal utility is very high, and we need to first understand extreme scenarios, like the potential for wholesale human replacement, in order to prepare for them.

Second, our models frequently deliver the clearest insights and are best at cutting through the fog when we focus on extreme scenarios, and we oftentimes learn about the less extreme scenarios by first understanding the more extreme ones. For example, even if our concerns about wholesale human displacement in the labor market prove unfounded, we still learn valuable lessons that are insightful in a world in which only some types of human workers are displaced, or in which workers are not fully displaced but experience stark declines in income. Similarly, even if it takes much longer for true superintelligence to be developed than the median prediction of AI researchers suggests, many of the societal effects of highly advanced intelligence may still be quite similar.

In that spirit, I want to focus on three themes in our module on the economics of transformative AI. First, an economy of non-human agents.
Many forms of intelligent agents operate in our economy, for example corporations, governments, and so on, but we have been blinded by the powerful doctrine of neoliberalism, which puts rational, self-centered human individuals at the center of all action. I remember several seminars where students started, for example, by postulating that corporations have an objective function, as opposed to simply pursuing the objectives of their human owners and managers. They were immediately shot down. Well, in our first lesson, we will look at a world where there are non-human actors, such as corporations or advanced AI systems, and we will focus on four big questions: What describes the allocation of resources between humans and other intelligent economic agents? Does our economy serve only the interests of humans, or also the interests of those other agents, and how could we tell? Does our economy even need humans to function? And, if machine intelligence advances without limits, will humans still be viable?

Our second lesson will focus on hacking humans. Advanced AI systems are getting better and better at deconstructing how the human brain works and at manipulating us by playing different parts of the brain off against each other. There is great danger in this if it is used for consumer manipulation or for political manipulation, but if it is used carefully, AI systems can also be good hackers and help us improve our lives.

Third, we will cover the AI control problem. One of the ultimate challenges of advanced AI systems is to make sure that they do what we want them to do. If we try to control an AI system that is far more intelligent than we are, there is a risk that we may be about as successful as a three-year-old attempting to control an adult. We may be easily outwitted. The AI control problem and the closely associated AI alignment problem look at how to ensure that AI systems pursue our human objectives and our human interests.
And there is a really important social science component associated with that, because what our human objectives are is in part a question for the social sciences. What I will propose in this lesson is that we economists have many useful tools to contribute to this challenge, and I will provide you with a flavor of them.