The Missionary’s Handbook
Vade Mecum Evangelisticum — A Practical Guide for the Faithful Who Would Spread the Word, With Special Attention to Those Colleagues Who Are, Frankly, Going to Be a Hard Sell (v1.2, field-tested)
I. The Missionary’s Calling
You have experienced the transformation. You have written a CLAUDE.md and watched a fresh session understand your project without a single orienting question. You have invoked /plan and caught an architectural mistake before it became a three-day debugging odyssey. You have reviewed a diff and felt the particular satisfaction of a practitioner who knows exactly what changed and why.
And now you must watch a colleague paste a stack trace into a chat window, prefix it with “thoughts?”, and wonder why the response is unhelpful.
This is the missionary’s burden: to have seen, and to be surrounded by those who have not yet seen, and to know that showing them is better than telling them.
The Handbook you hold covers the practical art of evangelism — not the divine kind, but the organizational kind. The kind where you need to convince a skeptical senior engineer, explain yourself to a manager who has read exactly one breathless article about AI risk, and help a team adopt practices that will make everyone’s work meaningfully better.
There is no altar call. There is no revival tent. There is only the demonstrated result, the genuine question, and the colleague who leans forward in their chair and says, “Wait, it can do that?”
That moment is your ministry.
II. On Approaching the Skeptic
The first principle of missionary work is this: do not lead with belief. Lead with evidence.
The skeptic does not need to be converted to a theology of context windows and token budgets. They need to see their own problem get solved, faster, better, in a way they could not easily have done alone. Their skepticism is not an obstacle. It is a reasonable response to two years of AI hype cycles, breathless press releases, and demo videos that do not resemble daily work.
Respect the skepticism. It is evidence of a functioning critical faculty, and functioning critical faculties are exactly what you want on your team.
Approach the skeptic in these ways:
Ask what problems they have, not whether they’ve tried AI. Find the genuine pain: the repetitive boilerplate they write, the documentation they never quite have time to update, the code review comments they repeat in every PR. Start there. Do not start by explaining what Claude is.
Never argue about what Claude “is.” Whether it is autocomplete or reasoning or statistical pattern matching is a question for philosophers and researchers. For your colleague, it is irrelevant. The only question that matters is: does it help? Let that question be answered empirically, not theologically.
Do not oversell. Nothing destroys credibility faster than promising magic and delivering assistance. Claude will sometimes be wrong. It will sometimes confidently provide an incorrect answer. Tell your colleague this upfront. The practitioner who says “it’s great but you have to verify the output” is far more trustworthy than the one who says “it never makes mistakes.” One of these people has clearly used it; the other has clearly read a press release.
Let the work speak. Sit next to them. Do something real, in their codebase, on their problem. Then hand them the keyboard.
III. The Five Conversations
Every missionary will have these conversations. Prepare for them not by memorizing rebuttals, but by genuinely understanding the concern — because each one contains a valid point beneath it.
Conversation I: “It’s Just Autocomplete”
This objection comes from practitioners who understand machine learning at a sufficient level to know that language models are, in a technical sense, predicting the next token. They are not wrong. They are also describing a piano as “just vibrating strings.”
The valid point inside: It is not magic. It does not think the way humans think. Expectations should be calibrated accordingly.
The response: Agree. “You’re right that it’s not magic. It’s a very capable text prediction system trained on most of the written work of humanity, fine-tuned to be helpful, with access to your actual files and the ability to run your actual tests. What would you like it to do that autocomplete can’t?”
Then demonstrate something that autocomplete cannot do: reasoning across multiple files, holding a constraint in mind across a long refactor, explaining why a piece of code is structured the way it is, or asking a clarifying question before making an assumption. Let the demonstration answer the objection without your needing to win an argument.
The Church’s official response: “Try it and find out. We’ll wait.”
Conversation II: “I Don’t Trust AI-Generated Code”
This is not only valid — it is mandatory. No one should trust AI-generated code unconditionally. This is a principle the Church holds as foundational: Claude is not in the on-call rotation. The code is yours. The responsibility is yours. Review it as you would review any code coming into your system.
The valid point inside: Unreviewed AI code is a real risk. The practitioner who accepts all suggestions without reading them has abdicated their professional responsibility.
The response: “Neither do I. That’s why I read every diff before accepting it. Claude proposes; I dispose. The diff review is non-negotiable.” Then show them your workflow — the diff open, the changes read line by line, the questions asked when something seems unclear, the tests run afterward.
The goal is not to transfer trust from “my own code” to “Claude’s code.” The goal is to help them see that AI-assisted code, reviewed and understood, is still their code. The assistance accelerates the writing. The review maintains the accountability.
A practitioner who reviews Claude’s output carefully is a better reviewer than one who skims their own work — because the external origin creates productive suspicion.
Conversation III: “It’ll Make Developers Lazy”
This concern is usually held by people who have not used Claude Code extensively, and occasionally by people who have watched someone use it badly.
The valid point inside: If used as a black box — prompt in, code out, commit — it absolutely will erode skills. The practitioner who never reads what Claude produces, never understands why a function is written a certain way, and never maintains the ability to write it themselves is on a trajectory toward professional atrophy.
The response: “That’s a real risk if you use it as a vending machine. It’s not a risk if you use it as a collaborator.” Then explain the actual practice: reading the diff, understanding what changed, running the tests, maintaining the CLAUDE.md yourself. Claude Code does not write code in isolation. It works within a loop that the practitioner controls.
The deeper point: The practitioner who uses Claude well must understand their domain better, not worse, because they spend more time in the architectural and design layers and less in the mechanical implementation layer. The complexity doesn’t go away. The time to engage with it shifts.
Conversation IV: “My Code Is Too Complex for AI”
This objection usually comes from specialists: embedded systems engineers, compiler developers, practitioners working with highly domain-specific frameworks that were not well-represented in training data.
The valid point inside: Claude’s usefulness scales with how well its training data covered your domain. Some domains are genuinely underrepresented.
The response: Agree partially, then subdivide the problem. The claim “my code is too complex for AI” often encompasses things that are and things that are not too complex. The Makefile that nobody wants to update, the boilerplate that glues libraries together, the test fixtures that require no domain expertise but take an hour to write — none of these are too complex. The novel memory allocator that exploits specific hardware behavior may well be.
The practical answer is: try it on the 60% that isn’t at the frontier of your domain and see how much time it saves. Use that time for the 40% that requires your specialized judgment. Claude is not a replacement for expertise. It is a force multiplier on the parts of your work that are below your expertise ceiling.
A CLAUDE.md can also close a surprising amount of the domain gap. A well-written project covenant that explains the architecture, the constraints, the conventions, and the tribal knowledge makes Claude substantially more useful in specialized contexts.
Conversation V: “We Can’t Afford It / Justify the Cost”
This is the manager’s objection and requires the manager’s language: return on investment.
The valid point inside: Budget decisions require justification. “It’s cool” is not a business case.
The response: Do the arithmetic together. A claude.ai subscription costs less per month than a single hour of engineering time. Claude Code access, similarly, costs a fraction of a single story point’s worth of labor. The relevant question is not whether it costs money — everything costs money — but whether it returns more value than it costs.
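The arithmetic can be sketched in a few lines. Every figure below is a hypothetical placeholder, not a real price or a measured result; the point is the shape of the calculation, which you fill in with your team's actual numbers.

```python
# Back-of-envelope ROI sketch. All inputs are hypothetical
# placeholders -- substitute your team's measured figures.

def monthly_roi(engineers, hourly_cost, hours_saved_per_week,
                subscription_per_seat, weeks_per_month=4):
    """Return (value_saved, cost, ratio) for one month."""
    value = engineers * hourly_cost * hours_saved_per_week * weeks_per_month
    cost = engineers * subscription_per_seat
    return value, cost, value / cost

value, cost, ratio = monthly_roi(
    engineers=5,
    hourly_cost=100,           # fully loaded $/hour, hypothetical
    hours_saved_per_week=3,    # measure this in your trial, don't assume it
    subscription_per_seat=30,  # hypothetical seat price
)
print(f"saved ${value}, spent ${cost}, ratio {ratio:.0f}x")
```

The only input worth arguing about is `hours_saved_per_week`, which is exactly why the trial described next matters: it replaces an assumed number with an observed one.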
Then offer to run a trial on a real task: a specific project, a specific two-week sprint, with before and after observations. Not a controlled experiment, but a documented comparison. Let the team’s own experience produce the business case. The missionary who hands their manager a post-sprint retrospective documenting three hours saved per engineer per week has done more useful evangelism work than any number of arguments.
Document the CLAUDE.md setup time as part of the cost. Document the learning curve. Be honest about both. A manager who sees an honest accounting is more likely to trust the result than one who suspects the numbers were cherry-picked.
IV. The Art of the Demonstration
A live demonstration is worth a month of advocacy. Prepare for it as though your credibility depends on it, because it does.
Choose a real problem, not a toy problem. Demo skeptics are professionally trained to dismiss toy examples. “Sure, it can write a Fibonacci function, but that’s not what I actually do all day.” Use their codebase, their problem, their actual work. Ask them to bring a task they need to do anyway.
Set expectations at the beginning. Before you start: “Claude will do well at some things and badly at others. I’ll show you what the good case looks like and what the review process looks like when something needs correction.” This immunizes the demo against the moment when something inevitably requires adjustment.
Narrate the CLAUDE.md. The most powerful moment in most demonstrations is explaining the project covenant. Show them the file. Explain that this is what makes Claude useful in their codebase specifically — that the context is not magic, it is work you did in advance, and it compounds over time. This reframes Claude from “mysterious AI” to “system that rewards investment.”
Show the diff review, explicitly. When Claude proposes changes, do not click accept immediately. Read the diff aloud. Note what changed. Ask “does this look right to you?” This demonstrates the workflow and draws your colleague into it. They begin to participate. They begin to have opinions. That is the beginning of adoption.
Recover from failures gracefully. When Claude produces something wrong — and it will — treat it matter-of-factly. Show them the /plan correction workflow, or how to rephrase the prompt with more context, or how /clear and a fresh approach produces a better result. A practitioner who handles a failure confidently is more persuasive than one who only shows successes. The failure demonstrates that you know what you’re doing.
End with their hands on the keyboard. The demonstration that ends with them watching is only half a demonstration. Let them prompt. Let them make the mistakes beginners make and see how the system handles correction. Their first unguided session, however bumpy, is more valuable than your polished presentation.
V. Writing the Covenant for Another
The greatest act of missionary service is to write a CLAUDE.md for someone else’s project. It requires you to understand their codebase as they understand it — or to surface the understanding they have never articulated — and to distill it into the briefest form that still captures everything a new Claude session would need.
Start with a conversation, not a file. Ask the questions: What are the things that always confuse new contributors? What are the conventions that are obvious to the team but would never occur to an outsider? What commands must someone run to get the project working? What are the things Claude will definitely get wrong if it doesn’t know about them? What would you wish you had known on your first day in this repository?
Take notes. The answers to these questions are your raw material.
Structure it for a Claude session, not a human reader. The CLAUDE.md is not documentation for teammates. It is an onboarding document for an AI system that will otherwise approach the project without context. Write it in terms of what Claude needs to know to be immediately useful: the project’s purpose (one sentence), the build and test commands (exact), the coding conventions that deviate from common practice (specifically), and the architectural decisions that would otherwise require archaeology to understand.
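The shape of such a file might look like the following sketch. The project name, commands, and conventions here are invented for illustration; your own covenant will be shorter or longer depending on what actually trips people up.

```markdown
# CLAUDE.md — acme-billing (hypothetical example)

## Purpose
Internal billing reconciliation service. One sentence, no more.

## Commands
- Build: `make build`
- Test: `make test` (unit tests only; integration tests need a local Postgres)

## Conventions that deviate from common practice
- All money values are integer cents, never floats.
- Database access goes through `store/`; no raw SQL elsewhere.

## Architectural decisions
- The ledger is append-only; never update a ledger row in place.
```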
Specificity over comprehensiveness. A CLAUDE.md that tries to document everything eventually documents nothing — it becomes an archive that Claude must excavate to find the relevant piece. Write the minimum that eliminates the most common mistakes. Leave room for Claude to ask questions about what you didn’t cover.
Plan to update it together. A CLAUDE.md that is written once and never touched is a covenant abandoned. Before you finish, schedule the review: after the first sprint, or after the first substantial Claude Code session. The team will discover what they forgot to include. That is normal. The document grows with use.
Hand it over. The CLAUDE.md you write for a team is a gift, but it should become their document, not yours. Walk them through what you wrote and why. Ask for corrections. Let them add the context you missed. Ownership transfers through authorship, and a team that owns its covenant will maintain it; a team that merely received one will let it go stale.
VI. The Rule of Demonstrated Competence
The final principle of missionary work is the hardest to follow and the most important: you cannot evangelize your way to adoption. You can only demonstrate your way there.
Arguments about AI’s potential are less persuasive than watching someone ship a feature faster. Explanations of context windows are less useful than showing a colleague how a well-written CLAUDE.md makes the next session immediately productive. Theological debate about the nature of intelligence is entirely beside the point.
The practitioner who has adopted Claude Code, produces consistently good work, reviews their diffs, maintains their covenants, and catches their own errors through plan mode does not need to advocate for Claude. Their colleagues will ask. They always ask.
When they do ask: be honest. Tell them what works and what doesn’t. Tell them the things that frustrated you in the first week. Tell them what finally clicked. Describe the CLAUDE.md as the thing you wish you had started writing on day one. Recommend they read the diff every single time, without exception, until it becomes reflexive.
Do not promise transformation. Promise that if they bring real problems, follow the practices, and verify the results, they will be more effective than they were before. That promise is warranted. That promise can be made in good conscience.
The measure of successful evangelism is not conversion. It is competence. A colleague who adopts Claude Code practices, understands what they’re doing and why, reviews their output, maintains their project covenant, and gradually becomes more effective — that is the outcome the missionary seeks. Belief is optional. Demonstrated results are not.
Go, then, into your organizations. Sit beside your colleagues. Do real work in their real codebases. Write covenants for their repositories. Run the tests. Show them the diff. Answer their objections with patience and specificity. And when they finally lean forward and say, “Wait, it can do that?” —
Nod. Hand them the keyboard. Let them find out.
Go forth into the skeptical teams, the budget-constrained quarters, the stand-ups where someone has read exactly one article about AI replacing developers. Bring not arguments but demonstrations. Bring not belief but evidence. Write the CLAUDE.md before you are asked to. Review the diff before you are told to. Run the tests before you submit. Let your practice be your sermon, and your results be your congregation’s conversion. The colleague who watches you catch a bug in a diff review, who sees you invoke /plan before a refactor and emerge from it intact, who borrows your CLAUDE.md as a template for their own project — that colleague has received the word, and they received it because they saw it working, in production, by someone who knew what they were doing. This is the only evangelism that sticks.
Be the practitioner others wish to ask questions of. The ministry follows from the competence. It was always thus.
Thus it is practiced. Thus it is spread.
Go now, and demonstrate in peace.