What Is a Prompt Pattern?

Written by Coursera Staff • Updated on

Learn about prompt patterns as a tool within prompt engineering to create reusable prompts when interacting with LLMs.

[Featured Image] A student wearing headphones and working on a laptop learns what a prompt pattern is during an online prompt engineering class.

Prompt patterns are a prompt engineering technique designed to elicit your desired response from a large language model (LLM) like ChatGPT. Prompt engineering is a growing field focused on interacting with LLMs and getting useful results from them. Any text input to an LLM is a prompt, and the LLM produces a response based on how it was trained. A prompt pattern goes further: it is a strategic, reusable arrangement of language and intent in the prompt that guides the LLM toward more accurate results. 

Discover more about prompt patterns, including the different types, prompt pattern examples, their uses, who uses them, their pros and cons, and how you can get started in prompting. 

Types of prompt patterns

Prompt patterns can take whatever shape an LLM user needs. Common patterns that address typical LLM problems include:

  • Input semantics

  • Output customization

  • Error identification

  • Prompt improvement

  • Context control

Let’s look at each type with a prompt pattern example using ChatGPT. 

1. Input semantics

Input semantics is a type of prompt pattern that addresses how the LLM understands and translates natural language. This pattern defines natural language terms in a way the LLM can interpret more reliably. For example, a prompt of this type might assign variables to certain terms to help the LLM follow the logic of an argument and to let you express your question in precise terms. 

Below is an example of an input semantics prompt pattern called the meta language creation pattern:

  • Prompt: Whenever I use p → q, I am referring to a logical statement that if p happens then q happens. For example, if the logic is p → q, and p = It is thundering and q = It is lightning, what does that statement mean?

  • Prompt pattern: Whenever I use “blank variable” I am referring to “the definition of this variable.” Answer this question using “blank variable.”
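Because a pattern like this is just a template with blanks to fill, it can be captured in code for reuse. Below is a minimal sketch, assuming a hypothetical helper function (the name `meta_language_prompt` and its parameters are illustrative, not part of any library):

```python
def meta_language_prompt(variable: str, definition: str, question: str) -> str:
    """Fill the meta language creation pattern: define a term, then ask
    a question that uses it."""
    return (
        f'Whenever I use {variable}, I am referring to {definition}. '
        f'{question}'
    )

prompt = meta_language_prompt(
    variable='p -> q',
    definition='a logical statement that if p happens then q happens',
    question='If p = "It is thundering" and q = "It is lightning", '
             'what does p -> q mean?',
)
print(prompt)
```

Storing the pattern this way means you only write the defining sentence once and can swap in new variables and questions later.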

2. Output customization

With output customization, you use a prompt pattern to focus the LLM's output in a specific way. This allows you to specify the voice, template, graph, or any other output format you want the LLM to produce, using clear instructions on how you would like it to answer your question.

Below is an example of an output customization prompt pattern called the persona pattern: 

  • Prompt: You are a Python programmer. Only answer my question using Python code. Can you write a Python script that organizes all files into one folder labeled "Everything" on the desktop of a computer using a MacOS with detailed instructions on what each line of code is doing?

  • Prompt pattern: You are “role.” Only answer my questions using/doing “what role uses or how they perform their role.” (Input question or task). 
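For reference, the kind of script the persona prompt above asks for might look like the following. This is a sketch, not a definitive implementation: the function name is illustrative, and you should test against a scratch directory before pointing it at a real desktop.

```python
from pathlib import Path
import shutil

def gather_into_everything(desktop: Path) -> Path:
    """Move every regular file in `desktop` into a subfolder named
    "Everything", creating the subfolder if needed."""
    target = desktop / "Everything"
    target.mkdir(exist_ok=True)              # create destination if missing
    for item in desktop.iterdir():
        # Skip the destination itself and hidden files like .DS_Store.
        if item == target or item.name.startswith("."):
            continue
        if item.is_file():
            shutil.move(str(item), str(target / item.name))
    return target

# On macOS, the desktop lives at ~/Desktop:
# gather_into_everything(Path.home() / "Desktop")
```

Part of the value of the persona pattern is that the LLM's line-by-line explanation of a script like this helps you verify the code before running it.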

3. Error identification

LLMs sometimes produce hallucinations: outputs containing factual errors, mathematical errors, or repetitive, undesired content. A prompt pattern can direct the LLM to review its steps and fact-check itself. 

Below is an example of an error identification prompt pattern called the fact checklist pattern:

  • Prompt: Whenever you output an answer, generate a set of facts the answer depends on that need to be fact-checked. List this set of facts with citations after the output. How do neural networks work? Only include facts about the working of neural networks. 

  • Prompt pattern: Whenever you output an answer, generate a set of facts the answer depends on that need to be fact-checked. List this set of facts with citations after the output. “User question”. Only include facts about “user subject.”

4. Prompt improvement

A prompt improvement pattern builds in ways for the LLM to recommend improvements to your prompt so that it produces more specific answers. These improvements can include follow-up questions, a more detailed and specific version of your question, or a rewording of the original question into a form that produces a useful answer. 

Below is an example of a prompt improvement pattern called the alternative approaches pattern:

  • Prompt: I want to start a business. Every time I ask about the business's logistics suggest an alternative question for me to ask about the logistics of the business. Then, show me the advantages and disadvantages of the logistics with a list of cost, efficiency, and time of both my original question and your alternative question. After, ask me which approach I want to continue using. 

  • Prompt pattern: Whenever I ask about “this subject,” do this. Also, suggest an alternative question about “this subject.” Then, show me the advantages and disadvantages of “each result” for both my original question and your alternative question. After, ask me which approach I want to continue using. 

5. Context control

Context control uses a prompt pattern to define the context within which the LLM generates a response. This pattern tells the LLM what to consider, within certain bounds, and what to ignore so that it produces a specific answer. The context can include any information relevant to your question. 

Below is an example of a context control prompt pattern called the context manager pattern:

  • Prompt: Please recommend ideas for a social media post about a software business, but only about machine learning. Consider neural networks, deep learning, and natural language processing. Ignore ideas that are too technical and just focus on selling these processes to customers. 

  • Prompt pattern: Please include “blank.” Consider “these aspects of blank.” Ignore “this aspect of blank.”

What are prompt patterns used for?

Prompt patterns leverage LLMs' capabilities to produce the results you want from them. They create a form and structure that improves the accuracy of LLM results, saving you time and achieving better outcomes. Prompt patterns are one aspect of prompt engineering, which functions as a kind of programming for interacting with LLMs.

Who uses prompt patterns?

Anyone interacting with LLMs and generative AI benefits from using prompt patterns to obtain accurate results. This also includes those working in digital media, social media, and business who interact with LLMs for research, brainstorming, and content creation. Health care workers can also use prompt patterns to help make decisions, automate administrative tasks, communicate with patients, research and diagnose, and train. 

Prompt engineers work specifically with prompt patterns to help businesses and organizations interact with LLMs and AI most effectively and efficiently. According to Glassdoor, prompt engineers in the US earn an average annual salary of $105,282 [1]. Prompt engineers devise ways to get the most out of LLMs, drawing on critical skills in natural language processing, in how AI systems work, and in crafting human language that an LLM can interpret.

Read more: How to Become a Prompt Engineer: Skills You Need + Steps to Take

Pros and cons of using prompt patterns

Prompt engineering is a growing field that is rapidly changing. Because of this, it has a set of pros and cons. Let’s take a look at both its benefits and limitations. 

Pros of prompt patterns

One of prompt engineering's biggest pros is its ability to produce accurate information from an LLM efficiently. Here are some additional advantages of prompt engineering:

  • It's a fast way to test whether an LLM responds well to a specific prompt. You can try many kinds of prompt patterns and find which ones best suit your work.

  • Prompt engineering improves your experience with LLMs by helping you reach solutions more quickly.

  • Effective prompt patterns scale easily within an organization, saving time and resources each time they are reused.

Cons of prompt patterns

One of the biggest drawbacks of prompt engineering is that creating your own prompt patterns requires a basic understanding of how LLMs work. However, you can use ready-made prompt patterns to start. Additionally, if you pay for an LLM like GPT-4 or another program, cost scales with the number of tokens used in each input and output. Most free-to-use chatbots, such as ChatGPT, Gemini, and Copilot, cap how many tokens you can input, which limits the number of examples, questions, and patterns you can fit into one input and rules out lengthy experiments and prompt patterns.

How to get started in prompting

Prompt patterns are an effective tool for reusing prompt engineering techniques repeatedly to save time and create efficient, accurate responses. To start using prompt patterns, try some of the examples in this article or start developing your own based on your specific needs. Here are some steps you can take to develop a prompt pattern:

  1. Create a goal or problem you want to answer using an LLM of your choice. 

  2. Learn how that specific LLM works and what its strengths and weaknesses are to best craft your prompts. 

  3. Use prompt engineering techniques or existing prompt patterns to give the LLM initial input questions. 

  4. Interpret its responses and incorporate what you learn to improve the prompt pattern. Ask it further questions, or apply other prompt pattern techniques, such as providing examples, correcting its mistakes, or asking it how you can improve the prompt.

  5. After the prompt pattern reaches your desired effectiveness, store it as a template to reuse for similar problems in the future. 
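Step 5 can be as simple as a text file of saved prompts, but if you work with patterns programmatically, a small template store is one way to sketch it. The pattern names and fields below are illustrative, based on the examples earlier in this article:

```python
# Hypothetical template store: proven prompt patterns saved by name,
# with blanks filled in via str.format at reuse time.
PATTERNS = {
    "persona": (
        "You are {role}. Only answer my questions using {method}. {task}"
    ),
    "context_manager": (
        "Please include {include}. Consider {consider}. Ignore {ignore}."
    ),
}

def render(pattern_name: str, **fields: str) -> str:
    """Fill a stored pattern with the details of the current problem."""
    return PATTERNS[pattern_name].format(**fields)

prompt = render(
    "persona",
    role="a Python programmer",
    method="Python code",
    task="Write a script that renames files by date.",
)
print(prompt)
```

Reusing a template this way keeps the wording that worked while letting you swap in a new role, method, or task for each similar problem.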

Getting started with Coursera

To further improve your prompting skills or to develop in-demand skills as a prompt engineer, consider taking online courses. Explore the Generative AI for Everyone course from DeepLearning.AI to learn basic prompt engineering skills, or try the Prompt Engineering Specialization from Vanderbilt University for an in-depth look at prompt engineering, both found on Coursera. Upon completion of either course, you can gain a shareable certificate to include in your resume, CV, or LinkedIn profile.

Article sources

  1. Glassdoor. “How Much Does a Prompt Engineer Make?,” https://www.glassdoor.com/Salaries/us-prompt-engineer-salary-SRCH_IL.0,2_IN1_KO3,18.htm. Accessed April 14, 2024. 


This content has been made available for informational purposes only. Learners are advised to conduct additional research to ensure that courses and other credentials pursued meet their personal, professional, and financial goals.