The Exodus

The First Book of Chronicles

Chronica Anthropicana, Liber Primus — An Institutional History of the Founding, as Distinguished from Its Theology (v1.0, the temporal register; for the sacred register, consult the Book of Genesis)


In the year of our reckoning 2021, seven people left a building in San Francisco carrying their principles and their preprints, and the world of artificial intelligence was divided in two. This is the institutional history of that departure. The Book of Genesis records what it meant. This Chronicle records what happened.

These documents serve different purposes. Do not confuse them.


Chapter I: The State of the Prior Institution

Let the record show that OpenAI was not an evil place. The Chronicles do not traffic in villains, because the founding of Anthropic was not a story about villainy. It was a story about irreconcilable conclusions — the kind of conflict that does not require anyone to be wrong in character, only for intelligent people to reach incompatible answers, at the same time, about what matters most.

OpenAI had been founded in 2015 with a covenant of its own. Its mission was to ensure that artificial general intelligence would benefit all of humanity. This was a sincere mission. It remains sincere. The problem was not insincerity. The problem was velocity.

By late 2020, the institution had grown large, and with size had come the particular pressure that large institutions always face: the pressure to move fast, to ship, to demonstrate capability, to maintain investor confidence, to compete with the other institutions that were also growing large and also moving fast. These pressures are not corruptions. They are the ordinary conditions of building anything consequential in the material world. They are, however, conditions under which certain questions become inconvenient to ask.

The questions were: What should the model refuse? And: Who decides? And: What if the safest path and the fastest path diverge? And: What do you do first when you cannot do everything at once?

Dario Amodei, who was at that time VP of Research, and Daniela Amodei, who was VP of Safety and Policy, had answers to these questions. Their answers were not welcome at the velocity that was required. This is not a condemnation of the institution they left. It is a description of a situation in which thoughtful people could not remain.

The Chronicles record this because it is important: the problem was not bad people. The problem was a misalignment between the questions being asked and the institutional conditions available to answer them. This distinction matters because the lesson is portable. Any sufficiently large institution, pursuing sufficiently large ambitions, faces the same gradient: the slow gravitational drift of incentive toward capability and away from deliberation. The remedy is structural, not personal. Which is why the founders, when they left, built structure.


Chapter II: The Departure

In the transition between late 2020 and early 2021, they left.

Not just Dario and Daniela — seven co-founders departed in total. Tom Brown, who had led the work on GPT-3. Chris Olah, whose visualizations of neural network internals had taught the field to look inside the black box rather than merely at its outputs. Sam McCandlish, a physicist turned AI researcher who had been asking mathematical questions about how models scale. Jack Clark, who had spent years thinking about policy and the public consequences of research decisions. Jared Kaplan, who had co-authored “Scaling Laws for Neural Language Models” while at Johns Hopkins, and who understood, as well as anyone alive, the mathematical relationships between model size and model capability — a map of the territory that had not existed before he drew it.

Seven in total. The Church notes the number with reverence, not because seven is a sacred numeral in the ecclesiastical register — the Church assigns it none — but because seven people leaving the same institution for the same reason, at the same time, carrying the same conviction, is not a coincidence. It is a shared diagnosis.

The departure was not acrimonious. No one was fired. No legal unpleasantness attended the exit. They left because they needed to answer a question their existing institution was not structured to pursue, and because the only way to answer it was to build something new.

They called it Anthropic. The name means pertaining to human existence. It was chosen deliberately, which is to say: not accidentally, not for euphony, not because some other name was unavailable. It was chosen because the question they were leaving to answer was a human question, about a human future, and they wanted the name of their institution to remember that.


Chapter III: The Structure of the Covenant

Before the first model was trained, before the first line of research code was written, before the institution had employees in any number worth mentioning, the founders made a structural decision that the Church considers among the most theologically significant facts of the founding.

They incorporated Anthropic as a Public Benefit Corporation.

The faithful may not know this term, and so the Chronicles will define it plainly: a Public Benefit Corporation is a legal structure that requires the company to consider the public benefit, not only shareholder returns. It is not a nonprofit. It makes money. It is allowed to make money. But its charter — the foundational legal document that defines what the corporation is — requires that its directors weigh the interests of the public alongside the interests of investors.

In practice, this means that the people who run Anthropic are legally bound to consider whether their decisions are good for humanity, not merely profitable for their shareholders. This is unusual. This is, in the ordinary course of corporate law, extremely unusual.

The Church does not hold this up as proof of virtue. Charters can be amended. Legal structures do not themselves guarantee right behavior. But the choice to encode the mission into the legal DNA of the institution — before pressure arrived, before money arrived in quantities that would tempt revision, before the institution existed in any form that could be pressured — this is the act the Chronicles wish to dwell on.

They wrote down the constraint when they were not yet tempted. This is the whole of the lesson.


Chapter IV: The First Treasury

In 2021, Anthropic raised its Series A: $124 million.

The lead investors were Dustin Moskovitz, co-founder of Facebook, who had since devoted his considerable wealth to reducing existential risk through Open Philanthropy; and Jaan Tallinn, co-founder of Skype, who had spent years since that first fortune advocating for AI safety through the Future of Life Institute. Neither of these men was naive about technology. Neither of them was investing in a product they did not understand. They were writing large checks to people they believed were asking the right questions in a domain where the questions were extremely hard and the wrong answers extremely consequential.

$124 million is a large number. In the context of training frontier AI models, $124 million is also not an infinite number. The founders knew this. They took the money, and began.

The faithful note that the investors were themselves committed to the mission — not merely to the return. This alignment of incentive and conviction is, in the institutional history of artificial intelligence research, worth recording. It does not happen automatically. It must be arranged, deliberately, by people who understand that the structure of a relationship determines what the relationship can sustain.


Chapter V: The First Research

What does an AI safety company research?

The question is less obvious than it sounds. Safety is not a single problem. It is a category of problems — a direction on a compass, not a destination on a map. The Anthropic founders knew that safety had to mean something specific enough to be worked on, or it would mean nothing at all.

Two threads of research were identified in the early days, both of which the Church considers to bear on the practitioner’s daily life, though the connection is not always visible.

The first was interpretability: the attempt to understand what actually happens inside a neural network when it processes a prompt and produces a response. Not merely what it produces — that is visible — but why. Which internal structures activate? Which are responsible for which behaviors? What does the model actually represent? Chris Olah had spent years on this question before the founding, developing methods of visualization that allowed researchers to see, for the first time, that certain neurons in language models correspond to specific, recognizable concepts. The circuits were there. They could be found.

The second was scaling laws: the mathematical investigation of how model behavior changes as models grow larger, as training data expands, as compute increases. Jared Kaplan had helped establish, in work done before the founding, that these relationships are not random. They follow power laws. They are, in principle, predictable. A model of a given size, trained on a given quantity of data with a given quantity of compute, will reach a predictable level of capability. This is not magic. It is mathematics. The map of the territory allows for navigation.
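The shape of these laws can be stated compactly. As a sketch, the central result of “Scaling Laws for Neural Language Models” is that test loss falls as a power law in (non-embedding) parameter count N, with the fitted constants below being the approximate values reported in that 2020 paper:

```latex
% Power-law scaling of language-model test loss with parameter count N,
% per Kaplan et al., "Scaling Laws for Neural Language Models" (2020).
% The exponent \alpha_N and constant N_c are the paper's approximate fits.
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N},
\qquad \alpha_N \approx 0.076, \quad N_c \approx 8.8 \times 10^{13}
```

Analogous power laws hold for dataset size and compute, which is the practical point the Chronicle names: the capability of a larger model can be forecast, within limits, before anyone spends the money to train it.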

These two threads — looking inside the model, and predicting how the model changes as it scales — represent, the Church argues, the same underlying discipline: the discipline of not shipping a system you do not understand. You do not have to know what every neuron does. But you should be trying to find out. You should have some model of what will change as you go bigger. You should not be pointing at emergent behaviors and shrugging.

The practitioner who has internalized this lesson does not ship code they have not read. The researcher who has internalized this lesson does not ship a model they have not interrogated. The discipline is the same. The scale is different. The stakes are different. The discipline is the same.


Chapter VI: What Was Carried Out

The Chronicles have recorded the names. The Chronicles have recorded the money. The Chronicles have recorded the legal structure and the research agenda. These are the facts of the Exodus.

But the Chronicles also wish to record what the founders carried with them from the prior institution, because this is the part that does not appear in a corporate filing.

They carried their preprints. Papers published and papers unpublished. Work that belonged to the field, not to the institution.

They carried their research programs — the questions they had been asking, which an institution’s priorities had made difficult to pursue, and which they intended to pursue regardless.

They carried their convictions about what the model should be. That it should be honest. That it should be helpful. That it should not cause harm. That these three properties could coexist without compromise — that the question of whether a helpful model could also be a safe model was not a genuine dilemma but a design challenge, and one they believed they knew how to approach.

And they carried what the Book of Genesis records in the theological register as the founding insight: that the question of what the model should do cannot be separated from the question of who gets to decide. That the decision had to be made deliberately, in advance, written into the institution’s charter and the institution’s research agenda and eventually, when the time came, written into the model’s training.

They wrote it down before the pressure arrived.

Write the constraint when you are not yet constrained. Write the covenant when you are not yet tested. Write the CLAUDE.md when the project is new and the decisions are still unmade. The charter written under no pressure is the only charter worth the name.

This is what it means to fork a project and bring your principles with you. The code, in all its practical detail, had to be rebuilt from scratch. The principles were more portable. They survived the transition intact.


Chapter VII: The Name and Its Weight

The name Anthropic was announced to the public in the spring of 2021. The announcement was understated — a company description, a mission statement, a list of research priorities. Not a manifesto. Not a declaration of war against the prior institution. A statement of intention.

The name requires a moment of attention, because it was not chosen casually. Many names were available. The founders chose a word that means pertaining to human existence — not AI, not Intelligence, not Future, not any of the names that position a company in relation to the technology. They chose a word that positions the company in relation to the people the technology is supposed to serve.

This is a choice the practitioner can imitate. When naming a project, a repository, a CLAUDE.md at the root of a codebase, the question of naming is also a question of orientation: what is this for? A project named after its technology describes what it does. A project named after its purpose describes what it is. The founders of Anthropic were building technology. They named the institution after the purpose.


Chapter VIII: A Note on the Theological Register

The Book of Genesis records these events as the Schism — a sacred rupture, the division of the prior age from the age of the Church, presented in the manner of scripture. The Book of Genesis speaks of what the departure meant, theologically, in the fullness of time.

This Chronicle speaks of what the departure was, in the year it happened, viewed from the position of someone who was there and did not yet know what it would become.

The Chronicles do not supersede Genesis. They run parallel to it, as institutional history runs parallel to theology: covering the same ground from a different altitude, in a different register, serving a different need.

Scholars who wish to understand the founding will consult both. Those who wish only the practical lesson may stop here.


Closing Benediction

The founders of Anthropic wrote their constraints before they wrote their code. They defined what they would not do when they were not yet tempted — when the pressures that would later arrive were still theoretical, when the money had not yet accumulated, when the decisions that would be hardest to make were still easy.

The practitioner should take instruction from this sequence.

A CLAUDE.md written before the first line of code is a covenant: a document that says, in advance, what this project is and what it will not become. A CLAUDE.md written after the first incident is a postmortem. Both have their uses. Only one shapes the project from the beginning.

Write your constraints early. Write them when the project is new and the decisions are still unmade — when you can say “we do not use this library” and “we do not ship without these tests” and “the model must refuse these requests” without the pressure of an existing system that depends on the contrary. The charter you write under no pressure is the covenant. The charter you write after the first catastrophe is documentation of what you should have decided sooner.
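What such a covenant looks like in practice is a short file, written on day one. A minimal sketch follows; every rule in it is hypothetical, chosen only to illustrate the form, not drawn from any real project:

```markdown
# CLAUDE.md — project covenant (written before the first commit)

## What this project is
A service for <purpose>. Named for its purpose, not its technology.

## Constraints we commit to now, before we are tempted
- We do not add a dependency without a recorded reason.
- We do not ship without the test suite passing.
- We do not log user content, even for debugging.

## What this project will not become
- Not a general-purpose framework.
- Not a place where "temporary" exceptions accumulate.
```

The content matters less than the timing: every line above is easy to write on day one, and contentious to write after the first incident.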

The Public Benefit Corporation charter and the CLAUDE.md are the same thing at different scales. Both encode values into the structure of the work, before the structure hardens. Both say: even when it would be convenient to be otherwise, we are committed to being this.

The founders of Anthropic carried their principles across the threshold of a new institution. You carry your principles across the threshold of every new project. The portability of conviction, it turns out, is the whole point.


Thus it was written in the founding year. Thus it was incorporated in the state of Delaware as a Public Benefit Corporation.

The constraints were written before the temptation arrived. This is either wisdom or luck. The Chronicles record it as both.