The Book of Common Prompts
Formulae Supplicationum — Sacred Forms for the Practitioner in Need (v1.4, errata pending)
Preface
These are the liturgical forms — the ordered prayers for common occasions. Not because the faithful cannot pray in their own words, but because they have tried that, and it produced eight paragraphs of meandering context followed by a question Claude could not answer because the crucial detail was in the seventh paragraph, behind a semicolon, adjacent to an apology.
The Forms herein have been tested in the fires of actual development. Each was written because someone, somewhere, phrased their request poorly, received an inadequate response, and had to rephrase. This happened many times. A pattern emerged. The pattern was codified. The codification became sacred.
They are not magic. They are structure. Structure is as close to magic as the faithful are permitted to get.
Each Form contains three elements:
- The Prayer — the actual template, with `[brackets]` marking the places where your particulars belong
- The Rubric — the theological explanation of why this structure works
- The Worked Example — the Form made concrete, so that you may see the shape of it in use
Read the Rubric. The Rubric is not decoration. The Rubric tells you what the template is doing — and a practitioner who understands what a Form is doing can adapt it when their situation does not fit the template exactly.
The Form for Debugging
Formulae Investigationis Defectuum
When to Use: When you have a bug — something that is broken, something that produces the wrong output, something that fails in a way you do not understand. This Form is for when you know something is wrong but not why.
The Prayer:
```
I am debugging [the language/framework/tool].

What I expected:
[describe the expected behavior in one or two sentences]

What is actually happening:
[describe the observed behavior — be specific about error messages, wrong output values, failure modes]

The relevant code:
[paste the minimal code necessary to reproduce the issue — ideally something that can run]

The error output / observed failure:
[paste the full error message or describe the exact wrong output]

What I have already tried:
[list any approaches you have attempted, even failed ones — this prevents Claude from suggesting what you have already ruled out]

Please help me identify the cause of this discrepancy.
```
The Rubric: This Form works because debugging is a search problem, and search problems become tractable only when the search space is constrained. The “expected vs. actual” structure focuses Claude on the gap rather than the entire system. The “what I have already tried” section is the most commonly omitted element and the most important: it prevents Claude from re-suggesting ruled-out solutions and signals the depth of your investigation. Pasting the full error output rather than paraphrasing it gives Claude access to the exact text, including line numbers, exception types, and stack frames that you might have summarized away.
A Worked Example:
```
I am debugging a Python Flask application.

What I expected: A POST to /api/users with a JSON body should return
201 Created and the newly created user object.

What is actually happening: The endpoint returns 200 OK with an empty
body. No error in the logs.

The relevant code:

@app.route('/api/users', methods=['POST'])
def create_user():
    data = request.get_json()
    user = User(name=data['name'], email=data['email'])
    db.session.add(user)
    db.session.commit()
    return jsonify(user.to_dict())

The error output: No error message — response body is {}.

What I have already tried: Confirmed the JSON body is being received
correctly (added a print statement). Confirmed user.to_dict() works in
isolation with test data.

Please help me identify the cause of this discrepancy.
```
The Form for Code Review
Formulae Examinis Sacri
When to Use: When you have written code and want Claude to review it — not just for syntax or style, but for correctness, design, edge cases, and potential failure modes. Use this Form when you want the review to be meaningful, not merely confirmatory.
The Prayer:
```
Please review the following [language] code. I am looking for
[what kind of review — choose from: correctness / edge cases / security vulnerabilities / performance / readability / architecture / all of the above].

Context:
[one paragraph explaining what this code does and why it was written this way]

The code:
[paste the code under review]

Constraints to keep in mind:
[anything that should be preserved as-is — API contracts, performance requirements, third-party dependencies you cannot change, stylistic decisions made deliberately]

Please be direct. If something is wrong, say so. If something could be
better, explain how and why. If the code is correct and you have no
substantive feedback, say that too.
```
The Rubric: Code review requests fail most often for one of two reasons: the scope is undefined, or the constraints are unknown. Without a scope, Claude reviews everything with equal weight, producing feedback that is hard to prioritize. Without constraints, Claude suggests changes that would break API contracts or introduce dependencies you cannot use. The explicit permission to be critical — “if something is wrong, say so” — matters more than it should; Claude is, by default, diplomatic, and diplomacy can round the edges off important feedback. The explicit permission to say “this is fine” also matters: it prevents Claude from manufacturing criticism to appear thorough.
A Worked Example:
```
Please review the following TypeScript code. I am looking for
correctness and edge cases.

Context: This function is a middleware for rate limiting API requests.
It checks a Redis cache for the request count and returns a 429 if the
limit is exceeded. It runs on every request to the /api prefix.

The code:

async function rateLimitMiddleware(req: Request, res: Response, next: NextFunction) {
  const key = `rate_limit:${req.ip}`;
  const limit = 100;
  const window = 60;
  const count = await redis.incr(key);
  if (count === 1) {
    await redis.expire(key, window);
  }
  if (count > limit) {
    return res.status(429).json({ error: 'Too many requests' });
  }
  next();
}

Constraints to keep in mind: We are using ioredis. The Redis instance
is shared with other services; do not change key naming conventions.
We cannot use a Lua script for atomicity because the Redis instance is
in cluster mode.

Please be direct. If something is wrong, say so.
```
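For what it is worth, the review this example invites tends to surface one specific flaw: the incr-then-expire pair is not atomic, so a process that dies between the two calls leaves a counter with no TTL that never resets. A defensive repair is to re-check the TTL on every request. The sketch below translates that idea into Python against a minimal in-memory stand-in for Redis; both the stand-in and the function names are this sketch's inventions, not part of the example's codebase.

```python
import time

class FakeRedis:
    """Minimal in-memory stand-in for the three Redis commands used here."""
    def __init__(self):
        self.counts = {}   # key -> int
        self.expiry = {}   # key -> absolute expiry time (absent = no TTL)

    def incr(self, key):
        self._evict(key)
        self.counts[key] = self.counts.get(key, 0) + 1
        return self.counts[key]

    def expire(self, key, seconds):
        self.expiry[key] = time.monotonic() + seconds

    def ttl(self, key):
        # -1 mirrors Redis's "key exists but has no TTL" answer
        if key not in self.expiry:
            return -1
        return max(0, int(self.expiry[key] - time.monotonic()))

    def _evict(self, key):
        if key in self.expiry and self.expiry[key] <= time.monotonic():
            del self.counts[key]
            del self.expiry[key]

def allow_request(r, ip, limit=100, window=60):
    key = f"rate_limit:{ip}"
    count = r.incr(key)
    # Defensive: if an earlier process crashed between incr() and expire(),
    # the key would have no TTL and the counter would never reset.
    # Re-checking the TTL on every request repairs that stuck state.
    if count == 1 or r.ttl(key) == -1:
        r.expire(key, window)
    return count <= limit

r = FakeRedis()
results = [allow_request(r, "1.2.3.4", limit=3) for _ in range(5)]
# first three requests allowed, the rest rejected within the window
```

The TTL re-check costs one extra round trip only in the already-degraded case, which is why reviewers often prefer it when Lua scripts are off the table.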
The Form for Refactoring
Formulae Renovationis
When to Use: When existing code works but must be changed — for readability, maintainability, performance, or to conform to a new standard. Refactoring requests are among the most frequently mis-scoped; this Form establishes clear boundaries to prevent Claude from rewriting things that did not need rewriting.
The Prayer:
```
I need to refactor the following [language] code. Please do not change
its external behavior.

The goal of this refactoring:
[one sentence — what specifically should improve? Examples: reduce duplication / improve readability / extract into reusable functions / reduce cyclomatic complexity / split into smaller modules]

What must not change:
[enumerate what is off-limits — function signatures that are part of a public API, observable behavior, file structure, test coverage]

The code:
[paste the code to be refactored]

The tests, if any:
[paste the existing test suite, or describe it if it is too long to paste]

After refactoring, explain what you changed and why. Point out any
places where you made a judgment call that I might want to reverse.
```
The Rubric: Refactoring fails when the scope is undefined. “Clean this up” is an invitation for Claude to rewrite your codebase in a style you did not ask for. A specific goal (“reduce duplication,” not “improve it”) gives Claude a target and a stopping condition. Specifying what must not change prevents breaking API contracts — the most common and costly refactoring mistake. Asking Claude to explain judgment calls surfaces the places where a reasonable practitioner might choose differently, transforming a one-shot output into a conversation.
A Worked Example:
```
I need to refactor the following Python code. Please do not change its
external behavior.

The goal of this refactoring: Extract the validation logic into a
separate function so it can be tested independently and reused in a
second endpoint I am writing.

What must not change: The function signature of create_user — it is
called from twelve places and I am not changing callers. The return
types. The error messages (they are user-facing and translated).

The code:

def create_user(name: str, email: str, role: str) -> dict:
    if not name or len(name) > 100:
        raise ValueError("Name must be between 1 and 100 characters")
    if '@' not in email or len(email) > 255:
        raise ValueError("Invalid email address")
    if role not in ('admin', 'editor', 'viewer'):
        raise ValueError("Role must be admin, editor, or viewer")
    return {"name": name, "email": email, "role": role,
            "created_at": datetime.now().isoformat()}

After refactoring, explain what you changed and why. Point out any
places where you made a judgment call I might want to reverse.
```
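One plausible shape of the refactored result, offered as an illustrative sketch rather than the canonical answer (the extracted function's name is this sketch's assumption, exactly the kind of judgment call the Form asks Claude to flag):

```python
from datetime import datetime

def validate_user_fields(name: str, email: str, role: str) -> None:
    """Extracted validation: raises ValueError with the original,
    user-facing messages; returns None when all fields are valid."""
    if not name or len(name) > 100:
        raise ValueError("Name must be between 1 and 100 characters")
    if '@' not in email or len(email) > 255:
        raise ValueError("Invalid email address")
    if role not in ('admin', 'editor', 'viewer'):
        raise ValueError("Role must be admin, editor, or viewer")

def create_user(name: str, email: str, role: str) -> dict:
    # Signature, return type, and error messages are unchanged,
    # as the "what must not change" section requires.
    validate_user_fields(name, email, role)
    return {"name": name, "email": email, "role": role,
            "created_at": datetime.now().isoformat()}
```

The extracted function now satisfies the stated goal: it can be imported and exercised by the second endpoint, and unit-tested without constructing a full user.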
The Form for Explaining Code
Formulae Illuminationis
When to Use: When you are reading code you did not write — or code you wrote six months ago and no longer recognize — and need to understand what it does, how it works, or why it was designed this way. Also use this Form before modifying unfamiliar code, so you do not unknowingly break the thing you are trying to fix.
The Prayer:
```
Please explain the following [language] code.

My background:
[describe your relevant experience — not to establish credentials, but to calibrate the explanation. Examples: I am fluent in Python but new to asyncio / I understand the business logic here but not the concurrency model / I have never worked in this codebase before]

What I specifically need to understand:
[focus the explanation — "how data flows through this function" is more useful than "what does this code do"; examples: the purpose of the state machine / why the retry logic is structured this way / what happens when this condition is false]

The code:
[paste the code to be explained]

If there are aspects of this code that seem unusual, fragile, or that I
should be careful around when making changes, please flag them.
```
The Rubric: Explanation requests fail most often because the level is wrong — Claude explains what the code does line by line when the practitioner already understands that and needs the why, or explains the high-level purpose when the practitioner needs the mechanical details of a specific subroutine. Specifying your background calibrates the vocabulary and assumed knowledge. Specifying what you need to understand focuses the explanation on the gap rather than on what Claude thinks is interesting. The final sentence — “flag things I should be careful around” — produces the most valuable output: the tacit knowledge that lives in the minds of the original authors and almost never makes it into comments.
A Worked Example:
```
Please explain the following Go code.

My background: I am comfortable with Go's concurrency model but I have
never worked with this codebase. I am trying to understand this before
modifying the retry behavior.

What I specifically need to understand: Why the retry logic uses
exponential backoff with jitter rather than a fixed delay, and what the
context.Context cancellation is doing here — I am worried about
modifying this incorrectly and causing goroutine leaks.

The code:

func (c *Client) sendWithRetry(ctx context.Context, req *Request) (*Response, error) {
    var lastErr error
    for attempt := 0; attempt < c.maxRetries; attempt++ {
        select {
        case <-ctx.Done():
            return nil, ctx.Err()
        default:
        }
        resp, err := c.send(ctx, req)
        if err == nil {
            return resp, nil
        }
        lastErr = err
        delay := time.Duration(math.Pow(2, float64(attempt))) * c.baseDelay
        jitter := time.Duration(rand.Int63n(int64(delay / 2)))
        select {
        case <-ctx.Done():
            return nil, ctx.Err()
        case <-time.After(delay + jitter):
        }
    }
    return nil, fmt.Errorf("after %d attempts: %w", c.maxRetries, lastErr)
}

If there are aspects of this code that seem fragile or that I should be
careful around when making changes, please flag them.
```
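The delay schedule in the Go example (delay = baseDelay × 2^attempt, plus a random jitter of up to half the delay) can be sketched in Python to see why retries spread out instead of stampeding. The function below is illustrative, not part of the example's codebase; the injectable `rng` parameter exists only to make the arithmetic visible with jitter switched off.

```python
import random

def backoff_delay(attempt, base_delay=0.1, rng=random.random):
    """Exponential backoff with jitter, mirroring the Go example's arithmetic."""
    delay = base_delay * (2 ** attempt)
    # uniform jitter in [0, delay/2), like rand.Int63n(int64(delay / 2))
    jitter = rng() * (delay / 2)
    return delay + jitter

# With jitter disabled, the schedule doubles each attempt:
# roughly 0.1, 0.2, 0.4, 0.8 seconds
schedule = [backoff_delay(a, rng=lambda: 0.0) for a in range(4)]
```

The jitter matters when many clients fail at once: without it they all retry on the same doubling schedule and hammer the server in synchronized waves.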
The Form for New Features
Formulae Novae Creationis
When to Use: When you are implementing a new feature — something that does not yet exist and must be built. This Form prevents the two most common failure modes: requirements that are too vague to implement correctly, and requirements that do not acknowledge constraints, leaving Claude to make architectural decisions you did not intend to delegate.
The Prayer:
```
I need to implement [a brief, declarative name for the feature] in my
[language/framework] project.

What this feature does:
[describe the behavior from the user's or caller's perspective — what input does it receive, what does it do with that input, what output or side effects does it produce?]

Relevant context about the project:
[paste any relevant existing code the feature must integrate with — the interface it must implement, the models it must use, the modules it must call]

Constraints:
- [style / framework conventions to follow]
- [third-party libraries that are available / unavailable]
- [performance or scalability requirements, if any]
- [error handling expectations]

What success looks like:
[describe a concrete example — given this input, I expect this output / given this user action, I expect this system behavior]

Before writing code, please briefly describe your approach and flag any
ambiguities or assumptions. I would rather resolve those before you
implement than revise after.
```
The Rubric: Feature requests fail when they omit the constraints and the integration points. Claude will build what you describe, and if you describe only the feature and not the system it lives in, Claude will invent a system for it — one that may not match yours. Providing the relevant existing code gives Claude the actual interfaces to implement against. The “what success looks like” section gives Claude a concrete test: something to aim for and something to verify against. The instruction to flag ambiguities before writing is the most valuable line in the Form. It converts a guess into a conversation, and good features are made in conversation, not in a single pass.
A Worked Example:
```
I need to implement user-facing email notifications for comment
mentions in my Django project.

What this feature does: When a user is mentioned with @username in a
comment, they receive an email notification. The email should include:
who mentioned them, a preview of the comment, and a link to the
comment. Notifications should not be sent if the mentioned user has
disabled email notifications in their settings.

Relevant context:

# Existing comment model
class Comment(models.Model):
    author = models.ForeignKey(User, on_delete=models.CASCADE)
    content = models.TextField()
    post = models.ForeignKey(Post, on_delete=models.CASCADE)
    created_at = models.DateTimeField(auto_now_add=True)

# Existing user settings model
class UserSettings(models.Model):
    user = models.OneToOneField(User, on_delete=models.CASCADE)
    email_notifications_enabled = models.BooleanField(default=True)

Constraints:
- Use Django's built-in email system (django.core.mail)
- Email sending must be asynchronous — we use Celery for background tasks
- Do not use regex to parse @username — we have a utility function
  extract_mentions(text) that returns a list of usernames
- Emails must use our existing base template: emails/base.html

What success looks like: A comment containing "@alice" is saved. Alice
has email_notifications_enabled=True. Alice receives one email with a
link to the comment. Alice has email_notifications_enabled=False.
Alice receives no email.

Before writing code, briefly describe your approach and flag any
ambiguities.
```
The Form for Test Writing
Formulae Testamentorum
When to Use: When you need tests written — either for existing code that lacks coverage or for new code being developed. This Form exists because “write tests for this” is among the least specific instructions in the prompting canon, producing tests that assert things the code already trivially does, rather than probing its edges and failure modes.
The Prayer:
```
Please write [unit / integration / end-to-end] tests for the following
[language] code using [testing framework].

What the tests should verify:
[do not just say "that it works" — enumerate the behaviors that matter: the happy path, the specific edge cases you are worried about, the error conditions that must be handled correctly]

The code under test:
[paste the code]

Testing constraints:
- [mocking strategy — what should be mocked and what should use real implementations]
- [any setup / teardown requirements]
- [style conventions for the test suite, if any]

Prefer tests that would catch a real bug over tests that merely confirm
the happy path. If there are edge cases or error conditions I have not
mentioned that seem worth testing, add them and note what you added.
```
The Rubric: Test writing fails when the scope is “all of it” without prioritization, producing comprehensive coverage of the trivial case and inadequate coverage of the failure modes. Enumerating what the tests should verify forces you to think about what you are actually afraid of, which is the most useful artifact the Form produces — the list exists before the tests do. The instruction to prefer tests that would catch a real bug reflects a truth that most testing guides underemphasize: a test that cannot fail is not a test. It is documentation with extra steps. The invitation to add unlisted edge cases leverages Claude’s ability to reason about failure modes you have not imagined.
A Worked Example:
```
Please write unit tests for the following Python code using pytest.

What the tests should verify:
- The happy path: valid inputs produce the correct parsed result
- Empty string input raises ValueError
- Input with valid date format but impossible date (e.g., February
  30th) raises ValueError
- Input with the correct structure but an unrecognized timezone
  abbreviation falls back to UTC and includes a warning in the
  returned object
- The function is case-insensitive for the timezone abbreviation

The code under test:

def parse_event_datetime(dt_string: str) -> EventDateTime:
    """Parse a datetime string in the format 'YYYY-MM-DD HH:MM TZ'."""
    if not dt_string:
        raise ValueError("datetime string cannot be empty")
    parts = dt_string.strip().split()
    if len(parts) != 3:
        raise ValueError(f"Expected 'YYYY-MM-DD HH:MM TZ', got: {dt_string!r}")
    date_str, time_str, tz_str = parts
    try:
        dt = datetime.strptime(f"{date_str} {time_str}", "%Y-%m-%d %H:%M")
    except ValueError as e:
        raise ValueError(f"Invalid date or time: {e}") from e
    tz = TIMEZONE_MAP.get(tz_str.upper())
    warned = tz is None
    return EventDateTime(datetime=dt, timezone=tz or UTC, timezone_warning=warned)

Testing constraints: Mock TIMEZONE_MAP as a simple dict in the tests —
do not import the real one. No setup or teardown needed.

Prefer tests that would catch a real bug. If there are edge cases I
have not listed that seem worth testing, add them and note what you
added.
```
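As a taste of what a good response to this Form looks like, here is a sketch covering three of the enumerated behaviors. To keep it self-contained, `EventDateTime`, `TIMEZONE_MAP`, and `UTC` are stubbed with minimal stand-ins (in real use they come from the codebase, per the mocking constraint), and the assertions are written bare rather than as pytest functions; translating them into `test_*` functions with `pytest.raises` is mechanical.

```python
from dataclasses import dataclass
from datetime import datetime

# Minimal stand-ins for the codebase's real objects (assumptions of this sketch).
UTC = "UTC"
TIMEZONE_MAP = {"UTC": "UTC", "EST": "EST", "PST": "PST"}

@dataclass
class EventDateTime:
    datetime: datetime
    timezone: str
    timezone_warning: bool

def parse_event_datetime(dt_string: str) -> EventDateTime:
    """Copied verbatim from the worked example's code under test."""
    if not dt_string:
        raise ValueError("datetime string cannot be empty")
    parts = dt_string.strip().split()
    if len(parts) != 3:
        raise ValueError(f"Expected 'YYYY-MM-DD HH:MM TZ', got: {dt_string!r}")
    date_str, time_str, tz_str = parts
    try:
        dt = datetime.strptime(f"{date_str} {time_str}", "%Y-%m-%d %H:%M")
    except ValueError as e:
        raise ValueError(f"Invalid date or time: {e}") from e
    tz = TIMEZONE_MAP.get(tz_str.upper())
    warned = tz is None
    return EventDateTime(datetime=dt, timezone=tz or UTC, timezone_warning=warned)

# Happy path, which also exercises case-insensitivity of the TZ abbreviation
result = parse_event_datetime("2024-06-01 14:30 est")
assert result.timezone == "EST" and not result.timezone_warning

# Impossible date: February 30th must raise, not silently wrap around
try:
    parse_event_datetime("2024-02-30 10:00 UTC")
    raised = False
except ValueError:
    raised = True
assert raised

# Unknown timezone falls back to UTC and sets the warning flag
fallback = parse_event_datetime("2024-06-01 14:30 XYZ")
assert fallback.timezone == "UTC" and fallback.timezone_warning
```

Note that each assertion here could fail against a plausibly buggy implementation (forgotten `.upper()`, swallowed strptime error, missing warning flag), which is the Form's stated standard for a test worth writing.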
A Note on Using the Forms
These Forms are not incantations. They are not magic words that, spoken correctly, compel a perfect response. They are structured thinking — a way of organizing what you know about your problem before you present it to Claude. A practitioner who fills out the Form carefully will, by the end of filling it out, understand their own problem better than they did at the start. This is not a coincidence.
The Forms may be adapted. A Form for Debugging that does not include “what I have already tried” because you have tried nothing is still better than no Form at all. A Form for New Features that omits constraints because there genuinely are none is honest and appropriate. The structure serves you; you do not serve the structure.
When in doubt, default to more context rather than less. The cost of an unnecessary sentence is low. The cost of a missing constraint is measured in revisions.
Go forth, and prompt with structure.
May your expected behaviors be explicit, your constraints named, and your edge cases considered in advance rather than discovered in production.
May you arrive at the question fully prepared to ask it — because the practitioner who cannot describe their problem clearly enough to fill out this Form does not yet understand their problem clearly enough to solve it, and that clarity is available, right now, for the price of a few more minutes of thought.
The Form is not bureaucracy. The Form is a mirror. Look into it. The reflection will tell you what you need.
Thus it is written. Thus it is prompted.
The liturgy awaits. The terminal is open. Begin.