Prompt Engineering Best Practices
1. Be Clear and Specific
Vague prompts produce vague results. The more precise your instructions, the better the output.
❌ "Write about dogs"
✅ "Write a 200-word informational paragraph about Golden Retrievers'
temperament and suitability as family pets, aimed at first-time dog owners."
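As a minimal sketch of turning that advice into reusable code, the helper below fills a template with explicit topic, length, audience, and focus values; the function and parameter names are illustrative, not part of any particular library.

```
def build_specific_prompt(topic: str, word_count: int, audience: str, focus: str) -> str:
    # Pin down length, audience, and focus instead of leaving them to the model's defaults.
    return (
        f"Write a {word_count}-word informational paragraph about {topic}, "
        f"focusing on {focus}, aimed at {audience}."
    )

# Reproduces the "good" example above from explicit parameters.
print(build_specific_prompt(
    topic="Golden Retrievers",
    word_count=200,
    audience="first-time dog owners",
    focus="temperament and suitability as family pets",
))
```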
2. Use Delimiters
Separate different parts of your prompt clearly:
Summarize the text delimited by triple backticks:
```
{text to summarize}
```
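A small Python sketch of the same idea, using only the standard library; the function name is illustrative. It keeps the instruction outside the delimiters and the (possibly untrusted) content inside them.

```
FENCE = "`" * 3  # triple backticks, built here so this snippet stays fence-safe

def summarize_prompt(text: str) -> str:
    # Instruction outside the delimiters, content inside, so the two cannot be confused.
    return f"Summarize the text delimited by triple backticks:\n{FENCE}\n{text}\n{FENCE}"

print(summarize_prompt("Large language models generate text one token at a time."))
```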
3. Specify Output Format
Tell the model exactly what format you want — JSON, Markdown, bullet points, table, etc.
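One hedged way to apply this in code is to append a schema line to the prompt and parse the reply strictly, so any drift from the requested format fails fast; the schema and the canned `response_text` below are placeholders, not output from a real model.

```
import json

# Assumed wording and schema; adapt both to your task.
format_spec = (
    "Respond only with valid JSON matching this schema: "
    '{"title": string, "summary": string, "tags": [string]}'
)
prompt = f"Summarize the attached article.\n\n{format_spec}"

# Stand-in for whatever your model client returns.
response_text = '{"title": "Example", "summary": "A short summary.", "tags": ["demo"]}'
data = json.loads(response_text)  # raises immediately if the model drifted from the format
print(data["tags"])
```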
4. Provide Context
Give the model the background information it needs. Don't assume it knows your specific situation.
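A minimal sketch of supplying that background explicitly; the helper name and the return-policy text are made up for illustration.

```
def prompt_with_context(question: str, context: str) -> str:
    # Prepend the background the model needs rather than assuming it already
    # knows your product, policies, or data.
    return (
        "Background information:\n"
        f"{context}\n\n"
        "Using only the background above, answer the question:\n"
        f"{question}"
    )

print(prompt_with_context(
    question="Can a customer return an opened item?",
    context="Our store accepts returns within 30 days; opened items get store credit only.",
))
```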
5. Iterate and Refine
Prompt engineering is iterative. Start simple, test, and refine based on results (see the sketch after this list):
- Write initial prompt
- Test with diverse inputs
- Identify failure modes
- Add constraints or examples to address failures
- Repeat
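A rough sketch of that loop as a test harness, with a placeholder `run_model` function standing in for a real model client; the test inputs and failure checks are only examples.

```
TEST_INPUTS = [
    "The meeting has been moved to 3pm on Thursday.",
    "",                  # empty input
    "buy now! " * 200,   # noisy, repetitive input
]

PROMPT_TEMPLATE = "Summarize the following text in one sentence:\n{text}"

def run_model(prompt: str) -> str:
    # Placeholder: swap in your real model client here.
    return "A one-sentence summary."

def find_failures(template: str) -> list[str]:
    # Collect inputs whose outputs violate simple checks (failure modes).
    failures = []
    for text in TEST_INPUTS:
        output = run_model(template.format(text=text))
        if not output.strip():
            failures.append(f"empty output for input {text[:30]!r}")
        elif len(output.split()) > 40:
            failures.append(f"over-long output for input {text[:30]!r}")
    return failures

print(find_failures(PROMPT_TEMPLATE))  # refine the template and re-run until this is empty
```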
6. Use XML Tags for Structure
XML tags help organize complex prompts (especially effective with Claude):
<instructions>Analyze the document and extract key entities</instructions>
<document>{content}</document>
<output_format>JSON with entities, types, and confidence scores</output_format>
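The same structure can be assembled programmatically; this is a minimal sketch and the helper name is illustrative.

```
def xml_prompt(instructions: str, document: str, output_format: str) -> str:
    # Wrap each part of the prompt in a named XML tag so the model can tell
    # instructions, input content, and format requirements apart.
    return (
        f"<instructions>{instructions}</instructions>\n"
        f"<document>{document}</document>\n"
        f"<output_format>{output_format}</output_format>"
    )

print(xml_prompt(
    instructions="Analyze the document and extract key entities",
    document="Acme Corp hired Jane Doe as CFO in March 2024.",
    output_format="JSON with entities, types, and confidence scores",
))
```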
7. Temperature and Sampling
- Low temperature (0-0.3): Deterministic, factual tasks
- Medium temperature (0.4-0.7): Balanced creativity and accuracy
- High temperature (0.8-1.0): Creative writing, brainstorming
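As one way to wire this up, the sketch below assumes the Anthropic Python SDK with an ANTHROPIC_API_KEY set in the environment; the task labels and the model id are placeholders, not values from this post.

```
import anthropic

# Temperature chosen per task, mirroring the ranges above.
TEMPERATURE_BY_TASK = {
    "data_extraction": 0.0,  # deterministic, factual
    "customer_reply": 0.5,   # balanced
    "brainstorming": 0.9,    # creative
}

def ask(task: str, prompt: str) -> str:
    # Requires ANTHROPIC_API_KEY in the environment.
    client = anthropic.Anthropic()
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model id; use whichever model you deploy
        max_tokens=512,
        temperature=TEMPERATURE_BY_TASK[task],
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text

# Example: ask("data_extraction", "List every date mentioned in the text below: ...")
```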
🌼 Daisy+ in Action: Prompt Engineering in Production
Daisy+ follows these practices in production: structured output formats for data extraction, explicit role boundaries in system prompts, temperature tuning per use case (low for data queries, moderate for customer responses), and continuous prompt refinement based on conversation logs. Every prompt is version-controlled and tested against real conversation scenarios before deployment.