From Control to Emergent Intelligence focuses on helping learners understand how generative AI behavior is shaped, guided, and extended, moving from surface-level interaction to a systems-level perspective. The course begins with how humans control models at inference time through prompting strategies and sampling parameters, then steps back to examine how models are shaped during training through reinforcement learning, fine-tuning, and feedback. Learners develop a clear mental distinction between intelligence that is baked into a model during training and intelligence that emerges at inference time through structure, reasoning, tools, and memory. This framing allows learners to see modern generative AI not as a static tool, but as a dynamic system whose behavior depends on both how it was trained and how it is used.

Modern Applications of Generative AI


Instructor: Bobby Hodgkinson
Access provided by Vanderbilt University
Recommended experience
Intermediate level
Individuals who are currently using or want to use AI in their personal or professional lives.
Details to know

7 assignments
May 2026

There are 7 modules in this course
Building on its core framing of training-time versus inference-time intelligence, the course moves beyond single prompts to structured reasoning, model comparison, and evaluation across different architectures and ecosystems, including open-source and mixture-of-experts models. Learners then explore how tools, memory, and context persistence allow AI systems to operate across time, enabling action-oriented workflows rather than isolated responses. The course concludes with real-world applications across domains such as coding, business, accessibility, and creative work, paired with individual-level ethical reflection on what it means to work alongside AI systems. By the end of the course, learners understand not only how to use generative AI effectively today, but how the combination of control, feedback, reasoning, evaluation, and external capabilities gives rise to more autonomous behavior, setting the foundation for agents and more advanced systems.
What's included
2 videos
2 videos•Total 26 minutes
- Course Overview•9 minutes
- Generative AI Refresher•16 minutes
This week emphasizes that prompting and sampling guide behavior without changing the underlying model. By briefly revisiting earlier concepts such as transformers and multimodal generative architectures, learners place prompting within the broader AI landscape while staying focused on practical control. The week closes by raising a key question: if users can shape behavior so effectively at inference time, how does the model learn what “good” behavior is in the first place? That question leads directly into the next week’s exploration of training, reinforcement learning, and fine-tuning.
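To make the sampling-parameter idea concrete, here is a minimal sketch of temperature-scaled softmax, the mechanism behind the "temperature" control this week covers. The logit values are made up for illustration and are not drawn from the course materials.

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Rescale logits by temperature, then normalize into probabilities.
    Lower temperature sharpens the distribution (more deterministic);
    higher temperature flattens it (more varied output)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                        # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]                   # hypothetical next-token scores
cold = softmax_with_temperature(logits, temperature=0.2)  # near-greedy
hot = softmax_with_temperature(logits, temperature=2.0)   # more exploratory
```

At low temperature nearly all probability mass concentrates on the top-scoring token; at high temperature the alternatives become much more likely, which is why temperature "changes everything" about output variety.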
What's included
4 videos • 2 readings • 2 assignments
4 videos•Total 27 minutes
- Prompting and Control Parameters Intro•2 minutes
- Inference-Time Control - Steering Without Retraining•11 minutes
- Why Temperature Changes Everything•10 minutes
- If Prompts Shape Behavior, Who Taught the Model What “Good” Is?•5 minutes
2 readings•Total 20 minutes
- The Invisible Rails: How Prompts Guide AI Behavior•10 minutes
- Riding the Probability Wave: From Determinism to Distributions•10 minutes
2 assignments•Total 60 minutes
- Controlling Beyond the Prompt - Directing AI’s Creativity•30 minutes
- What Did You Actually Control?•30 minutes
The week centers on reinforcement learning from human feedback (RLHF) and evaluator models as mechanisms for encoding preferences, alignment, and style into generative systems. Learners examine how feedback shapes model behavior, why RLHF has been so effective, and why it can also contribute to issues such as hallucinations and reward misalignment. The week closes by shifting attention back to inference time, asking how structured prompting and additional compute can enable models to reason, revise, and refine outputs without retraining, setting the stage for the study of reasoning scaffolds and chain-of-thought in the following week.
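The preference-comparison idea at the heart of RLHF can be sketched with the Bradley-Terry model: a reward model assigns scalar scores to two candidate responses, and the probability that a human prefers one over the other is a logistic function of the score difference. The scores below are invented for illustration.

```python
import math

def preference_probability(reward_chosen, reward_rejected):
    """Bradley-Terry model: probability that the 'chosen' response is
    preferred, given scalar reward-model scores. RLHF training pushes
    this probability toward 1 on human-labeled preference pairs."""
    return 1.0 / (1.0 + math.exp(-(reward_chosen - reward_rejected)))

p = preference_probability(2.0, 0.5)  # higher-scored response is favored
```

Because the model only sees score differences, any proxy the reward model latches onto (length, confidence, agreeableness) gets amplified, which is one route to the reward misalignment the week discusses.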
What's included
5 videos • 3 readings • 1 assignment
5 videos•Total 35 minutes
- Training and Alignment Intro•2 minutes
- Two Kinds of Intelligence - Training vs. Inference•8 minutes
- Fine-Tuning as Behavioral Sculpting•8 minutes
- Shaping Intelligence•12 minutes
- Why We Don’t Retrain for Every Thought•5 minutes
3 readings•Total 30 minutes
- The Brilliant Mimic: How AI Learns Without a Glimmer of Understanding•10 minutes
- The AI Finishing School: How Human Preference Teaches AI to Behave•10 minutes
- Case Study: When Good Feedback Leads to Bad AI•10 minutes
1 assignment•Total 30 minutes
- What Would You Train vs. What Would You Prompt?•30 minutes
This week focuses on inference-time compute and reasoning scaffolds such as chain-of-thought and step-by-step prompting, highlighting how large context windows allow models to “think on the page.” Rather than producing a single answer, models can fill the context window with intermediate steps, enabling feedback into their own reasoning and improving accuracy on complex tasks. The week emphasizes that this process does not involve learning or parameter updates. Instead, reasoning emerges from structure, additional context, and the ability to revisit earlier steps within the same prompt. Learners explore how self-critique, revision, and iterative prompting take advantage of large context windows to refine outputs without retraining. The week closes by shifting from individual reasoning strategies to broader comparison, preparing learners to examine how different models reason, specialize, and perform across tasks, which leads directly into the study of open-source models, mixture-of-experts architectures, and systematic evaluation in the following week.
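A reasoning scaffold of this kind is ultimately just prompt structure. The sketch below assembles a chain-of-thought-style prompt; the function name and wording are illustrative, not an API from the course.

```python
def chain_of_thought_prompt(question, max_steps=4):
    """Wrap a question in a scaffold that asks the model to write
    intermediate steps into the context window before answering,
    so later tokens can condition on earlier reasoning."""
    return (
        f"Question: {question}\n"
        f"Work through this in at most {max_steps} numbered steps, "
        "showing each step, then state the final answer on its own "
        "line prefixed with 'Answer:'."
    )

prompt = chain_of_thought_prompt("What is 12 * 17?")
```

No parameters change here: the model "thinks on the page" only because the scaffold makes its intermediate steps part of the very context it conditions on next.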
What's included
3 videos • 2 readings • 1 assignment
3 videos•Total 16 minutes
- Inference-Time Compute and Reasoning Intro•2 minutes
- Thinking on the Page - Reasoning Without Learning•11 minutes
- From Answers to Processes•4 minutes
2 readings•Total 20 minutes
- The AI's Short-Term Memory: How Context Windows Power On-the-Fly Reasoning•10 minutes
- The Great Divide: Why Some AIs "Think" Better Than Others•10 minutes
1 assignment•Total 30 minutes
- One Problem, Three Reasoning Paths•30 minutes
This week introduces the open-source model ecosystem and Mixture of Experts architectures, using models such as Mistral to illustrate how specialization and routing can improve performance without relying on a single monolithic model. Learners connect these ideas to earlier discussions of fine-tuning, seeing how different approaches shape behavior and capability in complementary ways. The week then shifts to evaluation and benchmarking as essential practices for understanding model strengths, limitations, and tradeoffs. Learners examine the history of benchmarking to see how rapidly frontier models have advanced, from outperforming grade-school benchmarks to surpassing expert-level performance when paired with tools. Concepts such as alignment, alignment drift, and reward hacking are introduced through examples, including Goodhart’s Law, to show why evaluation must evolve alongside model capability. The week closes by highlighting practical considerations around data ownership, IP boundaries, and deployment constraints—particularly in open-source settings—setting up the next week’s focus on tool use, memory, and systems that operate across time.
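The routing idea behind mixture-of-experts can be sketched in a few lines: a gate scores each expert, only the top-k fire, and their weights are renormalized. This is a simplified illustration with made-up scores, not an implementation of any particular model such as Mistral's.

```python
def top_k_routing(gate_scores, k=2):
    """Pick the k experts with the highest (positive) gate scores and
    renormalize their weights, as in a simplified mixture-of-experts
    layer: only the chosen experts run, saving compute per token."""
    ranked = sorted(range(len(gate_scores)),
                    key=lambda i: gate_scores[i], reverse=True)
    chosen = ranked[:k]
    total = sum(gate_scores[i] for i in chosen)
    return {i: gate_scores[i] / total for i in chosen}

weights = top_k_routing([0.1, 0.5, 0.3, 0.1], k=2)  # experts 1 and 2 fire
```

The payoff is specialization without a monolith: total parameters can grow with the number of experts while per-token compute stays roughly fixed at k experts.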
What's included
4 videos • 3 readings • 1 assignment
4 videos•Total 26 minutes
- Ecosystem and Evaluation Intro•2 minutes
- Why One Model Is Never Enough•8 minutes
- Open Source, Distillation, and Specialization•8 minutes
- From Benchmarks to Behavior•8 minutes
3 readings•Total 30 minutes
- The Shockwave from the East: How DeepSeek Rewrote the Rules of AI Dominance•10 minutes
- The Finish Line is a Mirage: When AI Benchmarks Stop Mattering•10 minutes
- The AI is Just the Beginning: Ownership, Openness, and the Realities of Deployment•10 minutes
1 assignment•Total 30 minutes
- One Task, Multiple Models•30 minutes
This week explores tool use, showing how models invoke calculators, search, APIs, and retrieval-augmented generation systems to access external capabilities. Tool use marks a key transition from passive reasoning to action-oriented behavior, where models no longer operate solely within their training data or context window. The week also introduces memory and context persistence, examining how short-term context, long-term storage, and summarization enable systems to operate across multiple interactions rather than isolated prompts. Learners explore basic evaluation heuristics that help monitor reliability as systems grow more complex. Together, tools and memory allow AI systems to maintain continuity over time, setting the stage for real-world applications and ethical considerations in the following week.
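The core of tool use is a dispatch step: the model emits a structured request, and the surrounding system routes it to a registered function. The tool names and call format below are hypothetical, chosen only to illustrate the pattern.

```python
# Minimal tool-dispatch sketch; tool names and the call schema are made up.
TOOLS = {
    "calculator": lambda expr: eval(expr, {"__builtins__": {}}),  # toy only
    "search": lambda query: f"[top results for: {query}]",        # stubbed
}

def dispatch(tool_call):
    """Route a model-emitted call like {'tool': 'calculator',
    'input': '2 + 3'} to the matching function and return its result,
    which is then fed back into the model's context."""
    fn = TOOLS.get(tool_call["tool"])
    if fn is None:
        return f"unknown tool: {tool_call['tool']}"
    return fn(tool_call["input"])

result = dispatch({"tool": "calculator", "input": "2 + 3"})
```

The handoff is what turns passive reasoning into action: the model's text becomes a request, and the tool's output becomes new context the model can build on.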
What's included
3 videos • 2 readings • 1 assignment
3 videos•Total 21 minutes
- Tools, Memory and Persistence Intro•2 minutes
- From Answers to Actions - Why Tools Matter•11 minutes
- Memory, Context, and Persistence Across Time•8 minutes
2 readings•Total 20 minutes
- The Brain and the Hammer: Why AI Tools Are Not AI Intelligence•10 minutes
- The Unblinking Memory: Why Continuity in AI Creates Responsibility•10 minutes
1 assignment•Total 30 minutes
- One Task, With and Without Tools•30 minutes
This week surveys modern applications across domains such as code generation, business workflows, accessibility enhancements, and creative media (music, speech, image, and video), emphasizing how AI systems function as productivity multipliers rather than replacements. The week also introduces ethical reflection at the individual level, focusing on what it means to work alongside AI systems in daily practice. Learners consider tradeoffs related to cognition, autonomy, and reliance, alongside discussions of the AI productivity paradox, guided by the reminder: “Every time you interact with an AI, realize you’re giving something up in exchange.” The course closes by inviting learners to reflect on what AI can do in their chosen field today, forming the basis for the Course 2 capstone and setting up Course 3, where attention shifts from current capabilities to the emergence of agents and the implications of more autonomous systems.
What's included
4 videos • 2 readings • 1 assignment
4 videos•Total 24 minutes
- Applications and Ethics Intro•2 minutes
- From Capability to Practice•4 minutes
- The Hidden Costs of Convenience•16 minutes
- Wrap Up•2 minutes
2 readings•Total 20 minutes
- The Universal Intern: How AI is Becoming a Productivity Multiplier in Every Field•10 minutes
- AI in the Wild: Case Studies on What Changes... and What Doesn't•10 minutes
1 assignment•Total 30 minutes
- What Will You Delegate - and What Will You Keep?•30 minutes
Build toward a degree
This course is part of the following degree program(s) offered by University of Colorado Boulder. If you are admitted and enroll, your completed coursework may count toward your degree, and your progress can transfer with you.¹
University of Colorado Boulder
Master of Science in Computer Science
Degree · 24 months
University of Colorado Boulder
Graduate Certificate in Artificial Intelligence
Degree
¹Successful application and enrollment are required. Eligibility requirements apply. Each institution determines the number of credits recognized by completing this content that may count towards degree requirements, considering any existing credits you may have. Click on a specific course for more information.
Offered by

CU Boulder is a dynamic community of scholars and learners on one of the most spectacular college campuses in the country. As one of 34 U.S. public institutions in the prestigious Association of American Universities (AAU), we have a proud tradition of academic excellence, with five Nobel laureates and more than 50 members of prestigious academic academies.
