Cohere Prompt Engineering Guide
Best practices for prompting Cohere's Command models effectively
TL;DR
- Be clear, concise, and specific
- Include examples of desired output format
- Use delimiters with explanatory headers
- Place instructions at the beginning
- Start the completion yourself to guide format
- Split complex tasks into smaller prompts
Key Principles
1. Clarity and specificity are paramount
2. Delimit different information types with headers
3. Provide context and background when helpful
4. Use examples for complex formatting
5. Tell the model what to do, not what to avoid
6. Control length explicitly when needed
Clear and Specific Prompts
The most effective prompts are clear, concise, specific, and include examples of exactly what a response should look like. Vague prompts lead to unpredictable outputs.
Examples
Be specific about requirements
Bad:
Write a product description
Good:
Write a product description for wireless earbuds.
- Length: 50-75 words
- Tone: Enthusiastic but professional
- Include: Key features, target audience, call-to-action
- Avoid: Technical jargon
Specific requirements ensure the output matches marketing needs.
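A requirement-driven prompt like the one above can be sent directly through the Cohere Python SDK. The sketch below is a minimal example, assuming the SDK's chat interface and a Command-family model name; adjust both to what your account actually exposes.

```python
import cohere

# Assumed client setup and model name; replace with your own key and preferred Command model.
co = cohere.Client("YOUR_API_KEY")

prompt = (
    "Write a product description for wireless earbuds.\n"
    "- Length: 50-75 words\n"
    "- Tone: Enthusiastic but professional\n"
    "- Include: Key features, target audience, call-to-action\n"
    "- Avoid: Technical jargon"
)

response = co.chat(message=prompt, model="command-r")  # assumed chat interface
print(response.text)
```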
Use Delimiters and Headers
Place instructions at the beginning of the prompt. Delimit different types of information (instructions, context, resources) with explanatory headers to help the model understand the prompt structure.
Examples
Structure with clear headers
Good:
## Instructions
Summarize the following article in 3 bullet points.
## Article
{article_text}
## Output Format
- Bullet point 1
- Bullet point 2
- Bullet point 3
Headers clearly separate instructions from content and expected output.
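In code, the header structure is easiest to keep consistent with a small template helper. The sketch below is illustrative only: build_prompt and its parameter names are not part of any Cohere API, just one way to assemble a delimited prompt.

```python
def build_prompt(instructions: str, article: str, output_format: str) -> str:
    """Assemble a prompt with explanatory headers separating each information type."""
    return (
        "## Instructions\n"
        f"{instructions}\n\n"
        "## Article\n"
        f"{article}\n\n"
        "## Output Format\n"
        f"{output_format}"
    )

article_text = "..."  # your source article goes here

prompt = build_prompt(
    instructions="Summarize the following article in 3 bullet points.",
    article=article_text,
    output_format="- Bullet point 1\n- Bullet point 2\n- Bullet point 3",
)
```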
Provide Examples
For complex formatting or specific output styles, showing examples is more effective than describing them. Let the model learn the pattern from your examples.
Examples
Show don't tell
Good:
Convert customer feedback to structured data.
Example:
Feedback: "Love the app but it crashes sometimes on my iPhone"
Output: {"sentiment": "mixed", "topic": "stability", "device": "iPhone", "action": "investigate crashes"}
Now convert:
Feedback: "The new update is amazing, especially the dark mode!"The example shows exact JSON structure and field extraction logic.
Do vs Don't Instructions
Tell the model what TO DO rather than what NOT to do. Positive instructions are clearer and more reliably followed than negative ones.
Examples
Positive instructions work better
Bad:
Don't use complex words. Don't write more than 100 words. Don't be formal.
Good:
Use simple, everyday language. Keep your response under 100 words. Write in a casual, friendly tone.
Positive instructions clearly define the target behavior.
Begin the Completion
You can guide the model's output format by starting the completion yourself. This is especially useful for ensuring specific formats like JSON or lists.
Examples
Start the response format
Good:
List 3 benefits of exercise.

Benefits:
1.
Starting with "1." ensures a numbered list format.
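In practice, beginning the completion means ending the prompt string with the start of the answer. The sketch below assumes the Cohere Python SDK's chat interface and a Command-family model name; with a completion-style endpoint the model continues directly from the text you supply, while chat models may restate the lead-in.

```python
import cohere

co = cohere.Client("YOUR_API_KEY")  # assumed client setup

# End the prompt with the start of the answer so the model continues the list.
prompt = "List 3 benefits of exercise.\n\nBenefits:\n1."

response = co.chat(message=prompt, model="command-r")  # assumed chat interface
print(response.text)  # expected to continue the numbered list started with "1."
```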
Task Splitting
For complex tasks, split them into smaller, sequential prompts. Use the output of one prompt as input to the next. This improves accuracy and makes the process more controllable.
Examples
Break down analysis tasks
Good:
Task 1: Extract all company names mentioned in this article.
Task 2: For each company, identify their industry.
Task 3: Summarize the relationship between the companies.

Let's start with Task 1.
Sequential tasks ensure each step is completed accurately before proceeding.
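Chaining the steps in code is straightforward: each call's output becomes part of the next prompt. The ask helper and model name below are assumptions for illustration; any Command-family chat endpoint can be used the same way.

```python
import cohere

co = cohere.Client("YOUR_API_KEY")  # assumed client setup

def ask(prompt: str) -> str:
    """Send one prompt and return the model's text reply (assumed chat interface)."""
    return co.chat(message=prompt, model="command-r").text

article = "..."  # your source article goes here

# Step 1: extract entities
companies = ask(f"Extract all company names mentioned in this article:\n\n{article}")
# Step 2: enrich each entity using the previous step's output
industries = ask(f"For each of these companies, identify their industry:\n\n{companies}")
# Step 3: synthesize, again feeding in the previous output
summary = ask(
    f"Summarize the relationship between these companies, given their industries:\n\n{industries}"
)
print(summary)
```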
#prompt-engineering #cohere #command #few-shot #delimiters #task-splitting
Last updated: December 1, 2024