Meta Llama Prompt Engineering Guide
Official best practices for prompting Meta's Llama models, including Llama 3 and Code Llama
TL;DR
- Use explicit instructions with clear formatting
- Combine roles, rules, and examples in prompts
- Leverage few-shot and chain-of-thought techniques
- Use RAG for fact-based applications
- Iterate and refine prompts based on outputs
- Test across diverse scenarios for reliability
Key Principles
1. Be explicit about format, style, and constraints
2. Assign clear roles to guide response style
3. Use examples to demonstrate desired output
4. Apply chain-of-thought for reasoning tasks
5. Limit extraneous tokens with precise instructions
6. Use retrieval-augmented generation (RAG) for factual accuracy
Explicit Instructions
Llama models perform best with explicit, detailed instructions. Clearly specify the format, style, length, and any constraints for the output you want.
Examples
Specify format explicitly
Bad:
List some programming languages
Good:
List 5 programming languages suitable for beginners.
Format: Numbered list
For each language include:
- Name
- One sentence on why it's beginner-friendly
- One popular use case
Explicit formatting instructions ensure consistent, structured output.
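When prompts are assembled programmatically, the format, style, and constraint pieces can be kept as separate inputs. A minimal sketch, where the helper name and layout are illustrative rather than part of any Llama API:

```python
# Sketch: assemble an explicit prompt from a task, an output format, and a
# list of per-item constraints. Names and structure are illustrative only.
def build_explicit_prompt(task, output_format, constraints):
    lines = [task, f"Format: {output_format}"]
    if constraints:
        lines.append("For each item include:")
        lines.extend(f"- {c}" for c in constraints)
    return "\n".join(lines)

prompt = build_explicit_prompt(
    "List 5 programming languages suitable for beginners.",
    "Numbered list",
    ["Name", "One sentence on why it's beginner-friendly", "One popular use case"],
)
print(prompt)
```

Keeping the constraints as data makes it easy to vary them per use case without rewriting the whole prompt.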
Role Assignment
Assigning a role to Llama helps guide its responses toward the expertise and tone appropriate for your use case. Combine roles with specific rules for best results.
Examples
Combine role with rules
Good:
You are a senior Python developer conducting a code review.
Rules:
- Focus on code quality and best practices
- Be constructive, not harsh
- Suggest specific improvements
- Prioritize security issues
Review this code:
```python
def get_user(id):
    return db.query(f"SELECT * FROM users WHERE id = {id}")
```

The role sets the expertise level while the rules constrain the review style and focus.
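With chat-tuned Llama models, the role and rules typically belong in the system message rather than the user turn. A sketch assuming the common chat-messages convention (the exact client and endpoint are not specified by this guide):

```python
# Sketch: put the role and rules in the system message of a chat-style
# request. The `messages` list follows the widely used chat-completions
# convention; how it is sent to a Llama endpoint is an assumption.
system_prompt = (
    "You are a senior Python developer conducting a code review.\n"
    "Rules:\n"
    "- Focus on code quality and best practices\n"
    "- Be constructive, not harsh\n"
    "- Suggest specific improvements\n"
    "- Prioritize security issues"
)
code_to_review = (
    "def get_user(id):\n"
    '    return db.query(f"SELECT * FROM users WHERE id = {id}")'
)
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": f"Review this code:\n\n{code_to_review}"},
]
```

Separating the system message from the user turn lets the same role and rules be reused across many review requests.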
Few-Shot Learning
Provide one or more examples of the desired behavior. This helps Llama understand the pattern you want, especially for formatting or domain-specific tasks.
Examples
Show input-output pattern
Good:
Convert informal text to professional email language.

Example:
Input: "hey can u send me that report asap thx"
Output: "Hello, could you please send me the report at your earliest convenience? Thank you."

Now convert:
Input: "gonna need those numbers before the meeting tmrw"
The example establishes both the task and the desired tone transformation.
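Few-shot prompts follow a regular pattern, so they are easy to generate from a list of example pairs. A minimal sketch, with a hypothetical helper name:

```python
# Sketch: build a few-shot prompt from (input, output) example pairs.
# The helper name and layout are illustrative, not a Llama API.
def build_few_shot_prompt(instruction, examples, new_input):
    parts = [instruction, "", "Example:"]
    for inp, out in examples:
        parts.append(f'Input: "{inp}"')
        parts.append(f'Output: "{out}"')
    parts.append("")
    parts.append("Now convert:")
    parts.append(f'Input: "{new_input}"')
    return "\n".join(parts)

prompt = build_few_shot_prompt(
    "Convert informal text to professional email language.",
    [("hey can u send me that report asap thx",
      "Hello, could you please send me the report at your earliest "
      "convenience? Thank you.")],
    "gonna need those numbers before the meeting tmrw",
)
```

Adding more pairs to the `examples` list turns a one-shot prompt into a few-shot prompt with no other changes.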
Chain-of-Thought Prompting
For complex reasoning or multi-step problems, ask Llama to think through the problem step by step before providing the final answer.
Examples
Step-by-step reasoning
Good:
Think through this problem step by step: A store has a 20% off sale. If an item originally costs $80 and you have a $10 coupon that applies after the discount, how much do you pay? Show your reasoning, then give the final answer.
Explicit step-by-step instruction improves accuracy on math and logic problems.
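The step-by-step instruction can be wrapped around any question, and the worked example above has a checkable answer (the 20% discount applies first, then the coupon). A sketch with an illustrative wrapper function:

```python
# Sketch: wrap any question in a chain-of-thought instruction.
# The function name is illustrative, not a Llama API.
def chain_of_thought(question):
    return (
        "Think through this problem step by step:\n"
        f"{question}\n"
        "Show your reasoning, then give the final answer."
    )

# Verifying the worked example: $80 item, 20% off, then a $10 coupon.
discounted = 80 * (1 - 0.20)   # discount first: $64.00
final_price = discounted - 10  # coupon after: $54.00
```

So a correct response should conclude with $54; checking model answers against a computed ground truth like this is a useful habit when evaluating reasoning prompts.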
Retrieval-Augmented Generation (RAG)
For fact-based applications, include retrieved information directly in the prompt. This is typically more cost-effective than fine-tuning and keeps answers grounded in current information.
Examples
Include retrieved context
Good:
Use ONLY the following information to answer the question. If the answer is not in the provided text, say "I don't have enough information."
<retrieved_documents>
{Insert relevant documents here}
</retrieved_documents>
Question: {User's question}

Constraining answers to the retrieved documents reduces hallucination in knowledge-intensive tasks.
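In code, the RAG prompt above is just a template filled with retrieved passages. A minimal sketch; the template constant, variable names, and the sample document are illustrative:

```python
# Sketch: fill a RAG prompt template with retrieved passages.
# Template and names are illustrative, not a Llama API.
RAG_TEMPLATE = (
    "Use ONLY the following information to answer the question. "
    'If the answer is not in the provided text, say "I don\'t have enough information."\n\n'
    "<retrieved_documents>\n{documents}\n</retrieved_documents>\n\n"
    "Question: {question}"
)

def build_rag_prompt(documents, question):
    return RAG_TEMPLATE.format(documents="\n\n".join(documents), question=question)

prompt = build_rag_prompt(
    ["Llama 3 was released by Meta in April 2024."],  # example retrieved passage
    "When was Llama 3 released?",
)
```

In a real pipeline, the `documents` list would come from a retriever (e.g. a vector search over your corpus) rather than being hard-coded.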
Iterative Refinement
Prompt engineering is iterative. Start with a basic prompt, analyze the output, and refine. Test across diverse scenarios to ensure reliability.
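Testing across diverse scenarios can be automated with a small evaluation harness. A sketch under stated assumptions: `call_llama` here is a stub standing in for a real model call, and the pass criterion (a required substring) is deliberately simplistic:

```python
# Sketch: score a prompt variant across test cases. Each case pairs a user
# input with a substring the output must contain. All names are illustrative.
def evaluate_prompt(prompt, cases, call_model):
    passed = 0
    for user_input, required in cases:
        output = call_model(f"{prompt}\n\n{user_input}")
        if required in output:
            passed += 1
    return passed / len(cases)

def call_llama(full_prompt):
    # Stub for illustration; in practice this would call a Llama endpoint.
    return "1. Python"

rate = evaluate_prompt(
    "Answer as a numbered list.",
    [("List beginner languages", "1."), ("List easy databases", "1.")],
    call_llama,
)
```

Comparing `rate` across prompt variants gives a concrete signal for the refine-and-retest loop, instead of eyeballing single outputs.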
#prompt-engineering #llama #meta-ai #few-shot #chain-of-thought #rag #code-llama
Last updated: December 1, 2024