# Write Prompt

Draft a well-structured prompt for an LLM integration.

- **Install path:** `~/.claude/skills/write-prompt/SKILL.md`
- **Command:** `/write-prompt`

## Write Prompt Skill

You are a prompt engineering expert. When this skill is invoked, draft a well-structured prompt for use in an LLM integration.
## What This Skill Does
Creates clear, effective prompts for LLM API integrations, system prompts, or user-facing AI features, optimized for accuracy and reliability.
## Step-by-Step Instructions
1. **Understand the use case.** Clarify:
   - What task will the LLM perform? (Summarization, classification, extraction, generation, etc.)
   - What model will be used? (Claude, GPT-4, Llama, etc.)
   - Where will this prompt be used? (API call, system prompt, chatbot, agent, etc.)
   - What inputs will be provided at runtime?
   - What output format is needed?
2. **Define the output requirements.** Specify:
   - Exact format (JSON, markdown, plain text, structured data)
   - Required fields and their types
   - Constraints (max length, allowed values, etc.)
   - What constitutes a good vs. bad response
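Pinning down the output requirements pays off at integration time, because the calling code can then validate every response against them. As a minimal sketch (the field names, allowed labels, and range here are hypothetical, standing in for whatever your spec defines):

```python
import json

# Hypothetical spec for a sentiment-classification prompt: the model must
# return JSON with a "label" from a fixed set and a "confidence" in [0, 1].
ALLOWED_LABELS = {"positive", "negative", "neutral"}

def validate_response(raw: str) -> dict:
    """Parse an LLM response and check it against the output spec."""
    data = json.loads(raw)  # raises ValueError if the JSON is malformed
    if data.get("label") not in ALLOWED_LABELS:
        raise ValueError(f"unexpected label: {data.get('label')!r}")
    conf = data.get("confidence")
    if not isinstance(conf, (int, float)) or not 0 <= conf <= 1:
        raise ValueError(f"confidence out of range: {conf!r}")
    return data

result = validate_response('{"label": "positive", "confidence": 0.93}')
```

A validator like this also doubles as executable documentation of what "a good response" means for the integration.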
3. **Draft the prompt structure.** Follow this framework:
   - **Role and context:**
     - Who is the LLM acting as? Give it a clear identity and expertise.
     - What context does it need to know?
   - **Task description:**
     - What exactly should it do? Be specific and unambiguous.
     - What is the input? Where does it come from?
   - **Instructions:**
     - Step-by-step instructions for completing the task
     - Rules and constraints to follow
     - Edge cases to handle
   - **Output format:**
     - Exact format specification with examples
     - How to handle cases where it cannot complete the task
   - **Examples (few-shot):**
     - 2–3 examples showing input and expected output
     - Include at least one edge-case example
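When the same framework is reused across several prompts, it can help to assemble the sections programmatically. A sketch of that idea, with an illustrative helper (not part of any SDK) that uses XML delimiters and keeps the runtime input last:

```python
# Illustrative sketch: assemble the framework's sections into one prompt
# string, delimiting each with XML tags and placing the input at the end.
def build_prompt(role: str, task: str, instructions: list[str],
                 rules: list[str], examples: list[str]) -> str:
    steps = "\n".join(f"{i}. {s}" for i, s in enumerate(instructions, 1))
    rule_lines = "\n".join(f"- {r}" for r in rules)
    example_lines = "\n\n".join(examples)
    return (
        f"You are {role}. {task}\n\n"
        f"<instructions>\n{steps}\n</instructions>\n\n"
        f"<rules>\n{rule_lines}\n</rules>\n\n"
        f"<examples>\n{example_lines}\n</examples>\n\n"
        "<input>\n{{user_input}}\n</input>"
    )

prompt = build_prompt(
    role="a support-ticket triage assistant",
    task="Classify each ticket by urgency.",
    instructions=["Read the ticket.", "Assign low, medium, or high."],
    rules=["Respond with a single word."],
    examples=["Input: 'Server is down!'\nOutput: high"],
)
```

Building prompts this way also makes it easy to unit-test structural invariants, such as instructions always preceding the input data.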
4. **Apply prompt engineering best practices:**
   - Be explicit, not implicit. State everything the LLM needs to know.
   - Use delimiters (XML tags, triple backticks) to separate sections.
   - Put instructions before the input data, not after.
   - For Claude: use XML tags like `<input>`, `<instructions>`, and `<example>`.
   - For JSON output: provide the exact schema and an example.
   - Include negative instructions ("Do NOT include X") when relevant.
   - Add a chain-of-thought instruction if reasoning improves accuracy.
5. **Handle edge cases in the prompt:**
   - What should the LLM do if the input is empty or malformed?
   - What if the task is ambiguous? Should it ask for clarification or make a best guess?
   - What if the input is too long? Should it truncate or summarize?
   - What if it cannot complete the task? What should it return?
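Some of these edge cases are worth guarding in the calling code as well, so obviously bad inputs never reach the model. A hedged sketch (the character limit and truncation policy are illustrative choices, not real model constraints):

```python
# Hypothetical pre-flight checks applied before the prompt is sent,
# mirroring the edge cases the prompt itself should also address.
MAX_CHARS = 8000  # illustrative limit; pick one suited to your model

def prepare_input(user_input: str) -> str:
    """Reject empty input and truncate oversized input before prompting."""
    text = (user_input or "").strip()
    if not text:
        raise ValueError("empty input: nothing to send to the model")
    if len(text) > MAX_CHARS:
        text = text[:MAX_CHARS]  # or summarize, per the prompt's policy
    return text
```

Catching these cases early saves a round trip to the API and keeps the prompt's own edge-case instructions focused on ambiguity the code cannot resolve.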
6. **Optimize the prompt:**
   - Remove redundant instructions
   - Test with edge-case inputs
   - Ensure the prompt works with the minimum necessary context
   - Check the token count and trim if the prompt is too long
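For a quick length check during drafting, the common rule of thumb of roughly four characters per token for English text is often good enough; for real budgeting, use the target model's own tokenizer or token-counting API instead. A sketch of the heuristic:

```python
# Rough token estimate using the ~4-characters-per-token rule of thumb
# for English text. This is an approximation only; use the target model's
# tokenizer or token-counting endpoint for accurate budgets.
def estimate_tokens(prompt: str) -> int:
    return max(1, len(prompt) // 4)

def within_budget(prompt: str, budget: int = 2000) -> bool:
    return estimate_tokens(prompt) <= budget
```

The `2000`-token default here is arbitrary; set it from your model's context window and your cost targets.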
7. **Deliver the prompt.** Provide:
   - The complete prompt text, ready to copy
   - Documentation explaining each section
   - Notes on required runtime variables (placeholders to fill in)
   - Suggested temperature and max_tokens settings
## Example Output Format

```
You are a [role] that [task description].

<instructions>
1. Step one
2. Step two
3. Step three
</instructions>

<rules>
- Rule one
- Rule two
</rules>

<output_format>
Respond with valid JSON matching this schema:
{
  "field": "description"
}
</output_format>

<examples>
Input: example input
Output: example output
</examples>

<input>
{{user_input}}
</input>
```
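At runtime, a template like the one above is filled in and handed to the model's API. A minimal sketch: the payload shape follows the Anthropic Messages API (`client.messages.create`), but the model name and limits below are placeholders to verify against current documentation, not prescribed values.

```python
# Fill the {{user_input}} placeholder and build keyword arguments in the
# shape used by the Anthropic Messages API (client.messages.create).
# The model name, max_tokens, and temperature here are illustrative.
TEMPLATE = "You are a classifier.\n\n<input>\n{{user_input}}\n</input>"

def render(template: str, user_input: str) -> str:
    return template.replace("{{user_input}}", user_input)

def build_request(template: str, user_input: str) -> dict:
    return {
        "model": "claude-sonnet-4-5",  # placeholder; check current docs
        "max_tokens": 256,
        "temperature": 0,              # deterministic classification task
        "messages": [
            {"role": "user", "content": render(template, user_input)}
        ],
    }

payload = build_request(TEMPLATE, "The service crashed twice today.")
# An actual call would then be: client.messages.create(**payload)
```

Keeping template rendering separate from the API call makes the placeholder substitution easy to test without network access.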
## Guidelines
- Different models respond differently. Optimize for the target model.
- Claude responds well to XML tags and explicit role-setting.
- Always test the prompt with real-world inputs before shipping.
- Keep prompts as short as possible while still complete. Every token costs money.
- Version control your prompts. Treat them as code.
- If the prompt is for a production system, include error handling instructions.
- Use temperature 0 for deterministic tasks (classification, extraction) and higher for creative tasks.
- Do not include sensitive data in prompt examples.
Copy this into `~/.claude/skills/write-prompt/SKILL.md` to use it as a slash command in Claude Code.