◉ An Effective Methodology
History is not a neat photo album; it’s more like a jigsaw puzzle with half the pieces missing, and the rest scattered in someone’s attic. People tell stories. Some get recorded. Some get embellished. Others get forgotten. And sometimes we’re left with claims so unusual that our first instinct is to squint and say, Really?
The problem is, debates about historical claims often get stuck in two unhelpful extremes:
✓ One side says, “It’s written down, so it must be true.”
✓ The other says, “If it’s strange, it must be false.”
Both approaches skip the actual work: figuring out how much the evidence we have should move our belief one way or the other. What we need is a tool that quantifies plausibility — something that treats history a bit like science, where we update our confidence based on the strength and weakness of the evidence.
Why Missing Evidence Isn’t Neutral — It’s a Clue

Imagine your friend claims that last night, during rush hour, an elephant walked across the Brooklyn Bridge. You check the news. Nothing. No photos, no social media posts, no eyewitness chatter. The complete absence of reports isn’t just a “lack of extra evidence” — it’s active evidence against the claim.
Why? Because if it happened, the event would have been highly public, easy to notice, and almost impossible to ignore. Silence in these cases is loud.
This is the core of what historians sometimes call the argument from silence. The trick is knowing when the silence is meaningful. If an event is private, obscure, or likely to go unrecorded, then the absence of sources means little. But if it’s a showstopper — something everyone would see — then missing corroboration is damning.
This distinction is critical, because without it, people can cherry-pick any isolated text or fragment and treat it as sufficient proof for an event that would, in reality, leave far more footprints.
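The elephant example can be put in Bayesian odds form as a toy sketch. The probabilities below are invented for illustration; only the structure (silence as a likelihood ratio far below 1) comes from the argument above.

```python
# Toy numbers, chosen only to illustrate the shape of the update.
p_silence_given_true = 0.001   # a bridge elephant with zero reports: nearly impossible
p_silence_given_false = 0.999  # no elephant, no reports: exactly what we expect

# Likelihood ratio (Bayes factor) of the observed silence.
silence_lr = p_silence_given_true / p_silence_given_false  # roughly 0.001

prior_odds = 0.01                        # generous starting odds for the claim
posterior_odds = prior_odds * silence_lr
print(posterior_odds)                    # roughly 1e-05: the silence alone crushes it
```

The silence is treated exactly like any other piece of evidence: it multiplies the odds, and because it is about a thousand times more expected when the claim is false, it drops the odds a thousandfold.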
A Plain-Language Decision Framework
Before we get into math, here’s the common-sense version of how to filter historical claims:
- Is it extraordinary?
✓ Does it clash with established knowledge about the world?
✓ Example: “A royal decree was issued” — mundane. “A god descended into the marketplace and turned the river to wine” — extraordinary.
- Is it public?
✓ Would large numbers of people have directly witnessed it?
✓ Example: A private conversation between two generals — not public. A meteor exploding over a capital city — public.
- Would we expect strong reporting?
✓ If it happened, would chroniclers, letters, or records have been made?
✓ Example: Major battles, coronations, or plagues generate records.
- How scarce is the evidence?
✓ Do we have multiple accounts, or just one fragile scrap?
- How independent and reliable are those accounts?
✓ Multiple copies of one bad source aren’t independent confirmation.
- Do we see silence where we’d expect noise?
✓ If trusted observers of the day fail to mention it, that’s highly relevant.
Think of it as a checklist where each “yes” on the left side (extraordinary, public, high-expectation) raises the bar for the kind of evidence we’ll accept.
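The “rising bar” idea can be made concrete with a small scoring helper. The function name and the qualitative labels are illustrative, not part of the article’s formal model.

```python
def evidential_bar(extraordinary: bool, public: bool, reporting_expected: bool) -> str:
    """Each 'yes' raises the bar for the kind of evidence we should demand."""
    score = sum([extraordinary, public, reporting_expected])  # booleans sum as 0/1
    return ["low", "moderate", "high", "very high"][score]

# A royal decree: mundane, but public and well-recorded.
print(evidential_bar(False, True, True))   # "high"
# Dragons over a capital city: every box ticked.
print(evidential_bar(True, True, True))    # "very high"
```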
The Formal Historical-Claims Model
This is where we turn that plain-language checklist into something structured — a set of variables and relationships that can actually be calculated.
➘ Our starting point is the claim and a prior probability — basically, how likely we thought it was before considering any new evidence.
➘ We explicitly tag whether the claim is extraordinary or mundane, since extraordinary claims start with a lower prior credence.
➘ Publicness matters because it determines how much reporting we’d expect. A reportage variable captures that expected level — low, medium, or high.
➘ Here we note whether evidence is scarce, and we define the set of surviving sources.
➘ This function tells us if two sources are truly independent — critical for avoiding the trap of “copy-paste confirmation.”
➘ Each source gets a reliability profile:
- Quality score (accuracy, detail, internal consistency)
- Gap in years from the event
- Bias rating (how motivated the author is to spin the story)
- Anonymity flag (1 if anonymous)
- Tampering suspicion score
➘ We calculate the likelihood ratio — how much a source moves the probability up or down — then adjust it for quality, bias, and other penalties.
➘ This flags cases where credible observers, who should have mentioned the event, say nothing.
➘ The likelihood ratio for silence. If silence is much more probable when the event didn’t happen, this number will be small, hammering the claim’s credibility.
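The ingredients above can be sketched in code. This is one possible reading: the fields follow the reliability profile listed above, but the exact discount formula and the flat anonymity penalty are my own illustrative choices, picked so that a plausible profile reproduces the 0.192 figure used in the dragon example below.

```python
from dataclasses import dataclass

@dataclass
class Source:
    quality: float    # 0..1: accuracy, detail, internal consistency
    gap_years: int    # years between the event and its recording
    bias: float       # 0..1: author's motivation to spin the story
    anonymous: bool   # True if the author is unknown
    tampering: float  # 0..1: suspicion of later alteration

def adjusted_lr(base_lr: float, s: Source) -> float:
    """Discount a source's base likelihood ratio by its reliability profile.
    (A temporal-gap discount could be added the same way; omitted for brevity.)"""
    lr = base_lr * s.quality * (1.0 - s.bias) * (1.0 - s.tampering)
    if s.anonymous:
        lr *= 0.8  # flat anonymity penalty (illustrative choice)
    return lr

def posterior_odds(prior_odds: float, source_lrs, silence_lr: float = 1.0) -> float:
    """Odds-form Bayes: multiply prior odds by every likelihood ratio,
    including the (usually tiny) likelihood ratio for silence."""
    odds = prior_odds * silence_lr
    for lr in source_lrs:
        odds *= lr
    return odds

# Assumed profile for a lone, weak source: low quality, biased, anonymous,
# some tampering suspicion, written centuries after the event.
lone_source = Source(quality=0.5, gap_years=400, bias=0.5, anonymous=True, tampering=0.2)
print(adjusted_lr(1.2, lone_source))  # approximately 0.192: weak support becomes a discount
```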
Instantiating the Model: Dragons Over Athens

Let’s apply the model to a fictional but instructive case.
Claim: “During the height of the Greek empire, dragons flew over Athens.”
Set the variables:
- Prior credence: about 0.0001 — extremely low, because dragons contradict all known zoology and physics.
- Publicness: high — thousands would have witnessed it.
- Expected reportage: high — it should have flooded ancient records.
- Evidence scarcity: extreme — we have only one surviving source.
- Base likelihood ratio: weakly supportive on its own.
Adjust for penalties: after discounts for quality, bias, anonymity, and suspected tampering, the source’s effective likelihood ratio shrinks to 0.192.
Silence factor: 0.01 — a strong negative weight, since silence from contemporary historians is nearly impossible if the claim were true.
Interpretation:
We start with tiny prior odds (0.0001). We multiply by the shrunken source likelihood ratio (0.192), then by the silence factor (0.01). The resulting posterior, about 0.0000002, approximates zero.
This is the mathematical expression of common sense: if thousands would have seen it, and there’s just one shaky source written centuries later, it didn’t happen.
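In odds form, the whole computation is a three-term product; here is a quick check of the arithmetic:

```python
prior_odds = 0.0001  # extraordinary claim: dragons over Athens
source_lr = 0.192    # lone source after quality/bias/anonymity penalties
silence_lr = 0.01    # contemporaries who should have noticed say nothing

posterior_odds = prior_odds * source_lr * silence_lr
posterior_prob = posterior_odds / (1 + posterior_odds)  # odds back to probability
print(posterior_odds)  # about 1.92e-07
print(posterior_prob)  # indistinguishable from zero at this scale
```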
Why This Approach Works
The power here is in making the reasoning explicit. Instead of vaguely saying “That’s unlikely,” we specify:
- Why the prior is low (extraordinary nature)
- Why expected reportage is high (public spectacle)
- Why a lone, low-quality, biased, and anonymous source can’t outweigh the silence of all others
When applied to real history, this protects us from giving undue weight to isolated or dubious accounts. It’s not about cynicism — it’s about calibrating our confidence to match the actual evidential landscape.
Applying the Model to the “Resurrected Saints” Claim

One of the most striking — and often overlooked — supernatural claims in the New Testament is in Matthew 27:52–53, where it is stated that, upon Jesus’ death, “many bodies of the saints who had fallen asleep were raised” and “appeared to many” in Jerusalem. At face value, this is an extraordinary, public, and testable claim. Let’s see what happens when we run it through our historical-claims model.
✓ Step 1 — Setting the Variables
- Claim type: extraordinary (a mass resurrection contradicting observed biology)
- Publicness: high; the text says they “appeared to many” in a major city
- Expected reportage: high
- Sources: one anonymous account, written decades later
- Bias: high (strong theological motive)
✓ Step 2 — Walking Through the Reasoning
➘ Extraordinary claim: This is not a mundane historical note; it directly contradicts all observed biology. That sets the base prior credence extremely low.
➘ Public nature: The text says they “appeared to many,” in a major city during a religious festival. This makes the expected reportage high — meaning, if true, we would expect abundant independent reports.
➘ Scarcity of sources: We have a single, anonymous source written decades later with no corroborating documents, no public inscriptions, no mention in other Gospels, and no Jewish or Roman records — despite this allegedly happening in a politically and religiously volatile city under Roman oversight.
➘ Silence penalty: This is the model’s most devastating factor. For a high-visibility public event, multiple independent attestations are expected. The complete silence of other observers yields a very low silence likelihood ratio.
➘ Bias and gaps: The sole source has strong theological motives and a significant temporal gap of several decades between the supposed event and its recording, both of which push credibility down.
✓ Step 3 — Model Output
The combined effect of:
✓ low prior,
✓ high expected reportage,
✓ extreme scarcity of sources, and
✓ a devastating silence penalty
…drives the posterior credence into the negligible range. Under this model, the rational conclusion is that the claim can be safely dismissed as historically implausible.
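The section’s qualitative settings can be plugged into the same odds-form update. The article gives no explicit numbers for this case, so the values below are assumptions on the same scale as the dragon example.

```python
prior_odds = 0.0001  # extraordinary: contradicts observed biology (assumed value)
source_lr = 0.15     # one anonymous, theologically motivated, decades-later source (assumed)
silence_lr = 0.01    # total silence where abundant reports are expected (assumed)

posterior_odds = prior_odds * source_lr * silence_lr
print(posterior_odds)  # about 1.5e-07: negligible, matching the model's verdict
```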
Why This Matters
The “hundreds of saints” passage is an ideal stress-test for the historical-claims model because it’s the type of event that would absolutely leave multiple independent traces if it happened. The complete lack of such corroboration — combined with the extraordinary nature of the claim — renders its probability extremely low.