//The Keeper of the Granaries

The Parable of the Praefectus

In the transition from Republic to Empire, Rome suffered from a familiar ailment: uncertainty.

The harvests were not especially bad, but rumors traveled faster than wagons. A ship delayed in Ostia became a story of famine by the time it reached the Forum. Bread prices flickered like nervous birds. The people did not starve, but they worried that they might.

To address this, the state relied on the Praefectus Annonae, the Prefect of the Grain Supply.
Historical note.
The Praefectus Annonae was a real Roman office formalized under Augustus. Its mandate covered grain imports (primarily from Egypt and North Africa), storage, price stabilization, and distribution to Rome’s urban population. Failure of the annona was not an economic issue—it was a regime-threatening event.

But as politics grew louder, the people noticed that the granaries mattered more than the Senate.

It was then that a new kind of Keeper was appointed. Let us call him Severus.

Severus was not a demagogue. He was a man of the establishment—wealthy, educated, and fluent in the ledgers of the merchant class. But unlike his predecessors, who viewed the granary as a math problem, Severus viewed it as an instrument of statecraft.

He agreed with what everyone whispered: the grain supply had become too critical to be left to blind arithmetic.

Severus did not burn the ledgers. He merely began coordinating with the Emperor. When unrest loomed near a festival, shipments arrived early. When the treasury was light, reserves were managed to depress prices. Nothing was done crudely. Severus despised vulgarity. He would sigh and say, “To pretend the granary exists outside the Empire is naïve.”
Composite character.
“Severus” is not a single historical figure but a composite of late-Republic and early-Imperial administrators who blurred the boundary between technical offices and imperial discretion. The parable tracks a real transition: logistics becoming an instrument of statecraft.

Merchants began to watch the palace more closely than the weather. Bakers adjusted prices not based on supply, but on rumors of the Emperor’s mood.

The grain still moved. The bread still baked. No famine came.

Yet the old magic was gone. Where once the people believed grain arrived because the system worked, they now believed it arrived because a powerful man willed it so. The “risk premium” of politics had entered the price of bread.

The Two Legitimacies

The parable is not really about grain. It is about the Federal Reserve, and specifically, the structural shift represented by the potential appointment of Kevin Warsh.

To understand the stakes, we must distinguish between the two ways an institution claims the right to exist.

  • Type I Legitimacy (The High Priesthood): Legitimacy derived from esoteric process. “We have the right to decide because we possess a technical expertise you cannot understand, bound by rules you cannot manipulate.” This is the legitimacy of the nuclear physicist or the neurosurgeon. It relies on the “Black Box.”
  • Type II Legitimacy (The Courtier): Legitimacy derived from alignment and outcome. “We have the right to decide because we are responsive to the needs of the nation and its elected leaders.” This is the legitimacy of the wartime general or the cabinet secretary. It relies on the “Open Door.”
Theory lineage.
This distinction loosely maps to Max Weber’s contrast between legal-rational authority (rule-bound, impersonal) and charismatic or patrimonial authority (personal, situational). Central bank independence is a modern attempt to preserve the former under democratic pressure.

For forty years, the Federal Reserve has performed the Theater of the High Priesthood.

Under Chairs like Bernanke and Yellen, the Fed presented monetary policy as a physics problem. They spoke in “Fedspeak”—a Delphic dialect designed to sound like mathematical inevitability.

This was a “Strategy of the Crown.” By claiming their decisions were the output of DSGE models rather than human choices, they absolved themselves of political blame. The Senate cannot yell at an equation.

But the audience has grown bored. The models failed to predict the inflation of 2021. The “transitory” narrative collapsed. The Black Box has cracked.

The Anti-Accord

Enter Kevin Warsh.

It is factually incorrect to paint Warsh as an outsider barbarian. He is a Stanford- and Harvard-educated lawyer, a former Fed Governor, and a fixture of the Hoover Institution. He is not anti-technocrat. But he represents a distinct break from the academic priesthood.

Warsh is the Courtier. He speaks the language of markets and power, not the dialect of the faculty lounge. And his central proposition—the “Severus Moment”—is his call for a new relationship between the Fed and the Treasury.

Warsh has argued for a “new Treasury-Fed Accord.” To the casual observer, this sounds technical. To the historian, it is an ironic inversion.

The original Treasury-Fed Accord of 1951 was a divorce decree. It liberated the Fed from the Truman administration’s demand to keep interest rates low to fund war debt. It established the modern, independent central bank.

Warsh’s proposal is a wedding vow.

He argues that in an era of massive government debt, the Fed can no longer pretend to be single. He suggests the Fed and Treasury must coordinate on the plumbing of debt issuance and balance sheet management. He frames this as “sound government.”

But structurally, this moves the Fed from Type I (Independent Rules) to Type II (Coordinated Discretion).

The Mechanism of the “Inflation Premium”

Why does this matter? Why shouldn’t the two economic arms of the government talk to each other?

Here we must leave the metaphor and look at the math. The economic case for Type I independence rests on the Time Inconsistency Problem, famously formalized by Kydland and Prescott.

The problem is simple: Politicians always have an incentive to print money today to boost growth, promising to be responsible tomorrow. Because the market knows the politician has this incentive, the market discounts the promise. They expect inflation, so they raise prices immediately. You get the inflation without the growth.
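
One stripped-down way to formalize this is a textbook Barro-Gordon-style sketch (the notation here is illustrative, not Kydland and Prescott’s exact setup). Suppose output rises only on surprise inflation, $y = \bar{y} + b(\pi - \pi^e)$, and the policymaker values output gains but dislikes inflation, minimizing $\tfrac{1}{2}\pi^2 - \lambda(y - \bar{y})$. Taking expectations as given, the policymaker’s best move is always $\pi = \lambda b$, whatever was promised. The market knows this and sets $\pi^e = \lambda b$, so the surprise vanishes:

$$\pi = \pi^e = \lambda b > 0, \qquad y = \bar{y} + b(\pi - \pi^e) = \bar{y}.$$

Positive inflation, zero extra output: the inflation without the growth.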

An independent Fed solves this by functioning as a “commitment device.” It is Ulysses tying himself to the mast. Because the market believes the Fed is a robot that doesn’t care about elections, the market doesn’t price in “political risk.”

If Warsh implements a “coordinated” Fed, he unties Ulysses.

If the bond market believes the Fed Chair is coordinating with the White House to “optimize debt issuance” or “support growth,” the Term Premium on long-term bonds will rise.

The yield on a 10-year Treasury bond is roughly:

$$\text{Yield} = (\text{Expected Short Rates}) + (\text{Term Premium})$$
The Term Premium is the extra compensation investors demand for uncertainty.
Empirical note.
Estimates of the term premium are model-dependent, but it is highly sensitive to inflation credibility and institutional trust. When policy discretion increases—even rhetorically—long-duration assets reprice quickly.
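
As a stylized illustration (the numbers are hypothetical, chosen only to show the mechanism): suppose markets expect short rates to average 3.5% over the decade. With a 0.5% term premium, the 10-year yields 4.0%. If coordination talk convinces investors to demand a 1.5% premium instead, the 10-year yields 5.0% even though not a single expected policy rate has changed:

$$4.0\% = 3.5\% + 0.5\% \;\longrightarrow\; 5.0\% = 3.5\% + 1.5\%$$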

The irony is that a “helpful” Fed might actually drive up borrowing costs. By trying to keep rates low for the President, the Courtier convinces the market that the currency is unsafe, causing long-term rates to spike.

The Priesthood is Dead

And yet, we must be fair to the Courtier. There is a strong argument that the Priesthood is already dead, and we are just waiting for the funeral.

We live in an era of Fiscal Dominance. US Debt-to-GDP is over 120%. Interest payments on the debt are becoming a massive line item in the federal budget.

In this environment, the idea that the Fed can ignore the Treasury is a fantasy. If the Fed raises rates high enough, debt service swallows the budget and the Treasury must borrow merely to pay interest. Therefore, the Fed is already constrained.
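
The arithmetic of that constraint is blunt (stylized, not a forecast): with debt near 120% of GDP, each percentage point on the government’s average interest rate adds roughly 1.2% of GDP to annual debt service.

$$\text{interest cost} \approx \frac{\text{debt}}{\text{GDP}} \times \bar{r} \approx 1.2 \times 1\% = 1.2\%\ \text{of GDP per point}$$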

Warsh’s argument is that we should stop pretending. If the Fed refuses to coordinate, it risks a chaotic collision where Congress strips its independence entirely. The Courtier argues: “Better to coordinate voluntarily and influence the King, than to be beheaded by him later.”

By formalizing coordination, Warsh might argue he is actually saving the system from a worse fate: total politicization by a populist Congress.

The Price of Clarity

The transition from a Priesthood to a Courtier is a shift from a System of Rules to a System of Men.

Kevin Warsh may be a man of immense virtue. He may intend to use his influence to enforce discipline. But institutions survive on the assumption that the person in the chair does not matter.

If the “New Accord” is signed, the “old magic” of the 1951 divorce vanishes. The grain will still arrive. The bread will still bake. We will not see hyperinflation overnight.

But everyone will know, with cynical clarity, that the price of money is set not by a neutral model, but by the needs of the State. And once the market learns that stability is a political choice rather than a structural guarantee, the cost of that stability goes up forever.

//The Eleven-Minute Bug

The Northumbrian Equinox

Around the year 725, at the monastery in Jarrow, the monk known as the Venerable Bede completed De temporum ratione—“On the Reckoning of Time.” It stood as the most sophisticated technical manual of its age, a guide to the computus: the complex mathematical system used to calculate the date of Easter.

To the modern eye, the medieval obsession with the date of Easter looks like pedantry. In reality, the computus functioned as the operating system of Western civilization. Because Easter was a “movable feast”—calculated as the first Sunday after the first full moon following the spring equinox—its date dictated the entire liturgical calendar.
Historical note.
While primarily religious, the liturgical calendar often served as the de facto coordination layer for secular administration, influencing when taxes were collected, when soldiers were mustered, and when legal contracts expired.
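
The computus really was an algorithm. As a flavor of the arithmetic involved, here is a minimal Python sketch of the modern Gregorian computus (the widely reprinted anonymous arithmetic form), not Bede’s Julian tables:

```python
def gregorian_easter(year: int) -> tuple[int, int]:
    """Return (month, day) of Easter Sunday in the Gregorian calendar."""
    a = year % 19                           # position in the 19-year lunar cycle
    b, c = divmod(year, 100)
    d, e = divmod(b, 4)
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    h = (19 * a + b - d - g + 15) % 30      # locates the ecclesiastical full moon
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7    # offset to the following Sunday
    m = (a + 11 * h + 22 * l) // 451
    month, day = divmod(h + l - 7 * m + 114, 31)
    return month, day + 1

print(gregorian_easter(2024))   # (3, 31) -> 31 March 2024
print(gregorian_easter(2025))   # (4, 20) -> 20 April 2025
```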

The computus was the temporal stack upon which the Middle Ages ran. And it contained a bug.

Bede’s system relied on the Julian calendar, which assumed the solar year was exactly 365.25 days long. But the tropical solar year is approximately 365.2422 days. The discrepancy—about eleven minutes—is a ghost. It is an anomaly so minuscule that it remains invisible within the span of a single human life. If you lived in the eighth century, your tables appeared to match the sky.

But the computus was not a local tool; it was a synchronized infrastructure. From the fjords of Scandinavia to the plains of Lombardy, every monastery copied the same tables and rang their bells at the same intervals. Because the system was perfectly coordinated, the error did not cancel out. It compounded.

$$\Delta_{\text{cumulative}} = \sum_{t=1}^{n} (365.25 - 365.2422) = 0.0078\,n \text{ days}$$

By the early thirteenth century, those eleven minutes had added up to roughly seven days. By 1582, the divergence was ten days. The “Spring Equinox” of the official tables occurred while the actual sun was still deep in its winter transit. The monks were looking at their books and seeing a reality that no longer existed.
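
The arithmetic is easy to check. A minimal sketch (assuming the year 325, the Council of Nicaea, as the baseline against which the Gregorian reform measured the drift):

```python
JULIAN_YEAR = 365.25       # days per year assumed by the computus tables
TROPICAL_YEAR = 365.2422   # approximate mean tropical year

drift_per_year = JULIAN_YEAR - TROPICAL_YEAR               # ~0.0078 days
print(f"Per year: {drift_per_year * 24 * 60:.1f} minutes")  # ~11.2 minutes

for year in (1225, 1582):
    drift_days = (year - 325) * drift_per_year
    print(f"Cumulative drift by {year}: {drift_days:.1f} days")  # ~7.0 and ~9.8
```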

The danger was not that the monks were incompetent, nor that they were ignorant of the shift. By the later Middle Ages, scholars were well aware of the discrepancy. The problem was the impossibility of correcting it without fracturing a synchronized Christendom. The danger was that they were all perfectly, harmoniously wrong.

The Synchronization Multiplier

The parable is not really about grain or Easter. It is about the architecture of consensus. We have traded monks for engineers and vellum for CI/CD pipelines, and we assume our modern infrastructure—built on the logic of silicon—is immune to the slow decoupling of the Middle Ages.

But the computus crisis demonstrates a fundamental law of systems: The Synchronization Multiplier.

In a fragmented system, errors function as noise. If one monastery in Gaul miscalculates the moon, they might feast a week early, but the rest of the world remains aligned with the stars. The mistake is contained by the lack of coordination. But in a synchronized system, an error is not noise; it is a signal. When the entire world uses the same tables, there is no local corrective mechanism.

Synchronization suppresses correction because the cost of local deviation is immediate and high. If a single node corrects its own table to match reality, it becomes incompatible with the network. Therefore, rational actors will defer correction even when the error is detected, preferring to be “wrong with the group” rather than “right alone.”
Falsifiable Hypothesis.
Organizations using a single dominant foundation model will detect systematic failures later than those using enforced model disagreement thresholds.
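
To make the contrast concrete, here is a toy simulation (illustrative only; the error magnitudes are arbitrary). When each node makes its own independent mistake, the network’s consensus stays close to the sky; when every node copies one flawed table, the consensus inherits the table’s bias exactly:

```python
import random

random.seed(0)
TRUE_VALUE = 365.2422    # the "sky"
N_NODES = 1000           # monasteries, services, model consumers

# Fragmented system: every node makes its own independent error (noise).
independent = [TRUE_VALUE + random.gauss(0, 0.01) for _ in range(N_NODES)]
fragmented_bias = abs(sum(independent) / N_NODES - TRUE_VALUE)

# Synchronized system: one shared table, one error, copied everywhere (signal).
shared_table = TRUE_VALUE + 0.0078   # the eleven-minute bug
synchronized_bias = abs(sum([shared_table] * N_NODES) / N_NODES - TRUE_VALUE)

print(f"Fragmented consensus error:   {fragmented_bias:.5f} days")    # ~0.0003
print(f"Synchronized consensus error: {synchronized_bias:.5f} days")  # 0.0078
```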

We see the “loud” version of this today in global IT outages. When a provider like CrowdStrike pushes a flawed update, it propagates instantly. We also see it in the “quiet ubiquity” of vulnerabilities like Log4Shell, where a standardized dependency creates a universal fault line waiting to be triggered. The system behaves as designed—instant synchronization—causing total failure.

However, the “loud” failure is the safer one. It invites immediate repair. The more dangerous failure is the “quiet” one—the Eleven-Minute Bug. This is a “correct” output that invisibly diverges from the territory it maps.

Intelligence as Infrastructure

We are currently crossing a threshold where Large Language Models (LLMs) are moving from assistive tools to judgment infrastructure. Until recently, our technical abstractions encoded rules. A compiler does not have an opinion on your logic; it follows a deterministic grammar. These are the “hard” tables of our era.

But as we integrate AI into the core of our development stacks—through agentic workflows and automated code refactoring—we delegate the layer of Judgment to the models. This is not merely a matter of suggestion; it is a matter of defaults. When judgment is embedded in the default settings of CI/CD pipelines and code review tools, opting out becomes an act of resistance rather than a neutral choice.

When millions of developers use the same underlying model to decide how a system should be architected, they are aligning the judgment of a global industry. We are building a new, global computus.

The Mechanism of Systemic Divergence

Here we must leave the metaphor and look at the mechanism. To understand the risk, we must distinguish between “hallucinations” and Systemic Divergence.

  • Hallucination: A loud error (e.g., $2+2=5$). These are bugs to be squashed.
  • Systemic Divergence: A quiet error of averages. The model suggests a pattern that is plausible, standard, and helpful, but carries a tiny, systematic skew away from hard technical reality.
Consider Cryptographic Divergence. Even in domains with specialized tooling, the gradient toward readable abstraction creates a steady pull away from hardware-faithful reasoning. An LLM, prioritizing patterns found in general-purpose software, may suggest “clean” variable-time comparison functions.
Technical note.
In a timing attack, an adversary deduces the key by measuring how long the CPU takes to process it. “Optimized” or “readable” code is often less secure than “constant-time” code for this reason.
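
A concrete sketch of the pattern described above (the function names are illustrative, not drawn from any particular codebase):

```python
import hmac

def naive_equal(a: bytes, b: bytes) -> bool:
    # The "clean" version a model trained on general-purpose code tends to prefer.
    # It is variable-time: it returns at the first mismatching byte, so response
    # time leaks how many leading bytes of a secret an attacker has guessed.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equal(a: bytes, b: bytes) -> bool:
    # Touches every byte regardless of where the first mismatch occurs.
    return hmac.compare_digest(a, b)
```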

If a generation of engineers relies on the same model to audit their security layers, the model’s preference for “clean” logic becomes the industry standard. The systems pass all unit tests. They look flawless to human reviewers. But the underlying hardware—the actual “stars”—remains bound by the physics of voltage and clock cycles. We build a world of code that is eleven minutes disconnected from how silicon actually processes instructions.

The Benchmark Trap

In the thirteenth century, to challenge the table was to challenge the infrastructure of Christendom. We are building our own circular validation loop: The Benchmark Trap.

We evaluate LLMs based on their performance on benchmarks like MMLU or HumanEval. These benchmarks are now part of the training data. Furthermore, as AI generates more of the world’s content, models are increasingly trained on their own previous judgments—a phenomenon researchers call “Model Collapse.”

This is not a buzzword; it is a specific statistical failure mode where synthetic outputs increasingly dominate the training data, narrowing variance and reinforcing prior statistical biases. We are checking the tables by looking at other copies of the tables. Synchronization creates an epistemic monoculture where a single “eleven-minute” error does not just survive; it becomes the new Spring Equinox.
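
A toy version of that statistical failure mode (a caricature of recursive training, not a claim about any specific model; parameters are arbitrary): fit a distribution to data, sample new “data” from the fit, and repeat. With small samples, the spread typically narrows generation after generation:

```python
import random
import statistics

random.seed(1)
samples = [random.gauss(0.0, 1.0) for _ in range(20)]   # the original "real" data

for _ in range(50):
    mu = statistics.fmean(samples)
    sigma = statistics.pstdev(samples)
    # Each generation trains only on what the previous generation produced.
    samples = [random.gauss(mu, sigma) for _ in range(20)]

# Typically well below the original spread of 1.0.
print(f"Spread after 50 generations: {statistics.pstdev(samples):.3f}")
```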

The Moral Misdiagnosis

The most fascinating element of the medieval computus crisis was the reaction to the divergence. They did not say, “We need to update our solar year constant.” Instead, they looked for moral reasons. They blamed the corruption of the papacy or the decay of the universities.

We see this pattern emerging in the AI discourse. When synchronized systems show signs of divergence—brittleness, bureaucracy, supply chain failure—our first instinct is to blame “lazy developers” or “corporate greed.” Moral explanations are cognitively cheaper than structural ones. They misdiagnose a systematic divergence in judgment as a failure of individual character. We will spend decades arguing over the heresy of the models while the eleven-minute bug continues to pull our infrastructure away from the stars.

The Gregorian Option: Forced Desynchronization

In 1582, Pope Gregory XIII patched the calendar with a hard fork: he deleted ten days. It was a brilliant technical solution that caused a social catastrophe. What would a “Gregorian Reform” for synchronized intelligence look like?

It requires a deliberate Desynchronization Strategy. These strategies are expensive, slow, and locally irrational—just as the Gregorian reform was. But they are necessary to introduce adversarial friction:

  • Forced Heterogeneity: We must resist converging on a single “best” model. Organizations should employ “N-version modeling,” where distinct models suggest architectural decisions, and human review is required if they disagree (a minimal sketch follows this list).
  • Epistemic Diversity as Redundancy: Universal agreement between models is not a sign of truth, but a risk signal. If every AI agent agrees a refactor is “best,” that is precisely when a human should look for the error.
  • The Hardware Equinox: We must maintain “monks” who look at the stars without the tables—engineers who write code and audit logic without AI assistance, serving as a control group for reality.
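
A minimal sketch of such a gate, assuming you already have callables that wrap distinct models (the names, the normalization step, and the unanimity rule are illustrative choices, not a prescribed design):

```python
from typing import Callable, Optional, Sequence, Tuple

def n_version_decision(
    prompt: str,
    models: Sequence[Callable[[str], str]],
    normalize: Callable[[str], str] = str.strip,
) -> Tuple[Optional[str], bool]:
    """Return (answer, needs_human_review)."""
    answers = [normalize(model(prompt)) for model in models]
    if len(set(answers)) == 1:
        # Unanimity is auto-accepted here, but per the bullet above, perfect
        # agreement is itself worth logging as a risk signal.
        return answers[0], False
    # Any disagreement is escalated to a human reviewer.
    return None, True
```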

The Stars and the Tables

The monks of Jarrow were not sloppy; they were the most disciplined technicians of their era. Their tragedy was their success. They built a system so coherent and universal that it silenced the stars for a thousand years.

We are currently building our own tables. We are exhilarated by the alignment AI offers. But the solar year does not care about our tables. Reality—whether the physics of a semiconductor or the limits of resources—remains indifferent to our consensus.

The question is not whether the models are smart. The question is: What are they wrong about by eleven minutes?