Prompt Engineering Strategies and Tactics for Large Language Models

This prompt engineering guide from OpenAI presents strategies and tactics for getting better results from large language models such as GPT-4. The key strategies are listed below; short code sketches illustrating several of them follow the list.

  1. Write Clear Instructions:
    • Include details in queries to get more relevant answers.
    • Ask the model to adopt a specific persona.
    • Use delimiters to clearly indicate distinct parts of the input.
    • Specify the steps required to complete a task.
    • Provide examples.
    • Specify the desired length of the output.
  2. Provide Reference Text:
    • Instruct the model to answer using a reference text.
    • Instruct the model to answer with citations from a reference text.
  3. Split Complex Tasks into Simpler Subtasks:
    • Use intent classification to identify the most relevant instructions.
    • Summarize or filter previous dialogue in dialogue applications.
    • Summarize long documents piecewise and construct a full summary recursively.
  4. Give the Model Time to “Think”:
    • Instruct the model to work out its own solution before rushing to a conclusion.
    • Use inner monologue or a sequence of queries to hide the model’s reasoning process.
    • Ask the model if it missed anything on previous passes.
  5. Use External Tools:
    • Use embeddings-based search for efficient knowledge retrieval.
    • Use code execution for accurate calculations or calling external APIs.
    • Give the model access to specific functions.
  6. Test Changes Systematically:
    • Evaluate model outputs with reference to gold-standard answers.
    • Use representative and diverse test cases for automated evaluations.
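
For the first two strategies, a minimal sketch using the official openai Python client is shown below: it sets a persona in the system message, fences the reference text with triple-quote delimiters, and asks for an answer backed by a citation. The model name, the placeholder document, and the exact wording of the instructions are assumptions for illustration, not prescriptions from the guide.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Persona plus output rules live in the system message (clear instructions).
system_msg = (
    "You are a meticulous research assistant. "
    "Answer ONLY from the document delimited by triple quotes. "
    "First quote the passage you relied on, then give a one-sentence answer. "
    "If the document does not contain the answer, say 'Not found in document.'"
)

document = "..."  # reference text supplied by the application
question = "What were the study's main findings?"

response = client.chat.completions.create(
    model="gpt-4",  # any chat-capable model works here
    messages=[
        {"role": "system", "content": system_msg},
        # Triple quotes act as delimiters separating the document from the question.
        {"role": "user", "content": f'"""{document}"""\n\nQuestion: {question}'},
    ],
)
print(response.choices[0].message.content)
```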
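
The tactic of summarizing long documents piecewise and building a full summary recursively can be sketched as follows. The paragraph-based splitting, the 1,500-word threshold, and the summarize helper are assumptions chosen for brevity; the guide leaves the chunking details open.

```python
from openai import OpenAI

client = OpenAI()

def summarize(text: str, model: str = "gpt-4") -> str:
    """Single-chunk summary (hypothetical helper, not from the guide)."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user",
                   "content": f"Summarize the following text in one short paragraph:\n\n{text}"}],
    )
    return response.choices[0].message.content

def summarize_recursively(text: str, max_words: int = 1500) -> str:
    """Summarize over-long text piecewise, then summarize the partial summaries."""
    paragraphs = text.split("\n\n")
    if len(text.split()) <= max_words or len(paragraphs) < 2:
        return summarize(text)  # short enough (or unsplittable): one direct pass
    mid = len(paragraphs) // 2
    left, right = "\n\n".join(paragraphs[:mid]), "\n\n".join(paragraphs[mid:])
    partials = [summarize_recursively(part, max_words) for part in (left, right)]
    return summarize("\n\n".join(partials))  # summary of summaries
```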
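
For the “time to think” strategy, the sketch below instructs the model to work out its own solution before judging a student's answer, mirroring the first tactic listed under that strategy. The example problem and the prompt wording are illustrative assumptions.

```python
from openai import OpenAI

client = OpenAI()

# Ask for the model's own worked solution before it evaluates the student's.
system_msg = (
    "First work out your own solution to the problem, step by step. "
    "Then compare your solution to the student's solution, and only then "
    "say whether the student's solution is correct."
)
problem = ("A train travels 120 km in 1.5 hours. "
           "The student says its average speed is 90 km/h.")

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": system_msg},
        {"role": "user", "content": problem},
    ],
)
print(response.choices[0].message.content)
```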
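
Embeddings-based search, one of the external tools mentioned above, can be prototyped in a few lines: embed the knowledge base once, embed each query, and rank passages by cosine similarity. The embedding model name and the toy passages are assumptions; in practice a vector database would replace the in-memory list.

```python
from openai import OpenAI

client = OpenAI()
EMBEDDING_MODEL = "text-embedding-3-small"  # assumed model name

def embed(texts: list[str]) -> list[list[float]]:
    """Return one embedding vector per input string."""
    response = client.embeddings.create(model=EMBEDDING_MODEL, input=texts)
    return [item.embedding for item in response.data]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm

# Embed the knowledge base once, then rank passages for each incoming question.
passages = ["Refunds are issued within 14 days.", "Shipping is free above $50."]
passage_vecs = embed(passages)

query_vec = embed(["How long do refunds take?"])[0]
best = max(range(len(passages)), key=lambda i: cosine(query_vec, passage_vecs[i]))
print(passages[best])  # most relevant passage to paste into the prompt as reference text
```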
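
Finally, systematic testing can start from a harness as small as the one below: run each test question, compare the output with a gold-standard answer, and report a pass rate. The substring check is a deliberately crude stand-in for a real grading step (for example, model-based grading of whether required facts appear), and the test cases are toy examples.

```python
from openai import OpenAI

client = OpenAI()

# Representative, diverse test cases with gold-standard answers (toy examples).
test_cases = [
    {"question": "What is the capital of France?", "gold": "Paris"},
    {"question": "How many days are in a leap year?", "gold": "366"},
]

def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

passed = 0
for case in test_cases:
    answer = ask(case["question"])
    # Crude check: does the gold answer appear verbatim in the output?
    if case["gold"].lower() in answer.lower():
        passed += 1

print(f"{passed}/{len(test_cases)} test cases passed")
```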

The guide emphasizes the importance of experimentation to find the most effective methods for specific tasks. It also introduces various tactics within each strategy to help users refine their interactions with the language model and improve overall performance.