
Epistemic Inflation: Stop Minting “Possibilities” Out of Thin Air
If you can say “timeless cause” or “disembodied mind,” that doesn’t make it a live option in a serious inference. Words aren’t lottery tickets for probability. The habit of treating a cool-sounding description as if it were a real, testable possibility is epistemic inflation. It counterfeits probability by sneaking undefined or incoherent ideas into the hypothesis space and then cashing them out as if they earned a share of the prior. The remedy is straightforward: enforce admission rules, budget for the unknowns you haven’t imagined, and apply Bayes without rigging the denominator.
What counts as a real possibility?
Admissibility Norm for any hypothesis H before you assign it a non-zero prior P(H):
✓ Coherence: Is H specified without contradiction, with clear truth-conditions you could, in principle, check?
✓ Interface: If H claims effects in the physical world, is there even a schematic bridge for how it interacts with matter, time, or causation?
✓ Evaluability: Can H be constrained by evidence, logic, or probability rather than hide behind mystery?
If a proposal fails these, it doesn't deserve a slice of the prior. It isn't "very small." It's not admissible. Treating non-admissible claims as if they merely have tiny priors is the core mistake; you're still letting them into the game.
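To make the gate concrete, here is a minimal sketch in Python. The Hypothesis record, the example entries, and the boolean flags are hypothetical illustrations; the real judgments behind each flag are substantive, not automatic.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    name: str
    coherent: bool       # specified without contradiction, checkable truth-conditions
    has_interface: bool  # at least a schematic bridge to matter, time, or causation
    evaluable: bool      # can be constrained by evidence, logic, or probability

def admissible(h: Hypothesis) -> bool:
    """All three gates must pass before h earns any prior at all."""
    return h.coherent and h.has_interface and h.evaluable

candidates = [
    Hypothesis("ordinary mice", True, True, True),
    Hypothesis("disembodied mind", False, False, False),
]

# Only admissible candidates enter the hypothesis space. The rest get no
# prior, not a tiny one: their P(H) is simply undefined.
pool = [h for h in candidates if admissible(h)]
print([h.name for h in pool])  # ['ordinary mice']
```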
Reserve space for the unknowns
Do not force all probability mass into the few stories you can currently imagine. Keep an explicit reserve for unimagined mechanisms U. Use a protected remainder so familiar options don't look inevitable simply because you silently excluded what you couldn't name:
P(H₁) + P(H₂) + … + P(Hₙ) ≤ 1 − ε and P(U) = ε > 0.
Plain meaning: do not spend your entire probability budget on the hypotheses you can currently name. Reserve a positive chunk ε for an "unknowns" option U. That is why the total prior assigned to the listed hypotheses H₁ through Hₙ must be at most 1 − ε.
✓ P(H₁) + P(H₂) + … + P(Hₙ): the combined prior you give to the known competitors H₁ through Hₙ.
✓ ε: the explicit probability for unimagined or not-yet-modeled explanations U; it must be strictly above zero.
✓ Why this matters: if ε were set to 0, you would quietly exclude future discoveries and make a familiar hypothesis look inevitable simply because you pre-allocated all the mass to what you could name.
Example: if you keep ε = 0.2, then your known hypotheses can sum to at most 0.8. If you currently have three options, you might assign P(H₁) = 0.4, P(H₂) = 0.25, P(H₃) = 0.15, which totals 0.8, leaving 0.2 for U. This protects against overconfidence and prevents inflating a pet hypothesis by starving its real competition.
That ε reserve acknowledges reality: future discoveries and currently unmodeled natural explanations exist. Without it, you inflate your pet hypotheses by starving their competition.
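Here is the budget rule as a short Python sketch, reusing the illustrative numbers above (ε and the three priors are assumptions for the example, not recommendations):

```python
EPSILON = 0.2  # protected reserve for the unknowns option U
TOL = 1e-9     # tolerance for floating-point comparison

priors = {"H1": 0.4, "H2": 0.25, "H3": 0.15}  # illustrative values only

known_mass = sum(priors.values())
assert known_mass <= 1 - EPSILON + TOL, "named hypotheses are overdrawn"

priors["U"] = 1 - known_mass  # whatever remains goes to the unknowns option
assert priors["U"] > 0, "the reserve must stay strictly positive"

print({k: round(v, 3) for k, v in priors.items()})
# {'H1': 0.4, 'H2': 0.25, 'H3': 0.15, 'U': 0.2}
```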
How epistemic inflation usually happens
◉ CONCEIVABILITY-AS-POSSIBILITY
Imaginability is not admissibility. Being able to picture a "timeless cause" or "disembodied mind" does not earn the claim a seat at the table. A candidate must clear three gates: COHERENCE (no internal contradiction), INTERFACE (at least a thin story for how it touches the observable world), and EVALUABILITY (what would lower its credence). Until then, it does not get a prior. Write it this way so you don't cheat: P(H) is undefined until H clears all three gates, not merely tiny.
- ✓ Quick test: list one concrete observation that would count against the claim and one that would count for it—without redefining terms afterward.
◉ POSSIBILITY-BY-STIPULATION
Stipulating "omnipotence makes anything possible" deletes constraints instead of arguing through them. A move that works equally well no matter what we observe explains nothing, so it earns no prior. If every outcome would be counted as a hit, the Bayes factor is neutral: P(E|H) = P(E|¬H), so BF = 1.
- ✓ Key terms: STIPULATION, CONSTRAINTS, UNFALSIFIABLE.
- ✓ Rule: do not replace mechanisms with declarations.
◉ PROMISCUOUS PRIORS
Handing out non-zero priors to any sentence that parses crowds the hypothesis space with fictions. The order is ADMISSIBILITY FIRST, ALLOCATION SECOND. Keep an explicit reserve for unknowns so you don't starve real competitors: P(H₁) + … + P(Hₙ) ≤ 1 − ε, with P(U) = ε > 0.
- ✓ Key terms: PRIOR INFLATION, HYPOTHESIS CROWDING, UNKNOWN RESERVE.
- ✓ Practice: reject incoherent entries before you start dividing probability mass.
◉ EQUIVOCAL EVIDENCE
Data that fit many stories equally do not move the needle. When rivals expect the evidence to roughly the same degree, the evidence is nondiscriminating: P(E|H₁) ≈ P(E|H₂) ≈ … ≈ P(E|Hₙ), so updates should be small. Treating background facts like "there is order" as unique confirmations is padding, not progress.
- ✓ Key terms: EVIDENTIAL SYMMETRY, NONDISCRIMINATING DATA.
- ✓ Fix: sharpen predictions so one side rates the evidence much higher than the other; the toy comparison below shows the difference.
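A toy comparison in Python (every likelihood below is invented for illustration) shows why shared expectations leave the needle still:

```python
def bayes_factor(p_e_given_h: float, p_e_given_not_h: float) -> float:
    """BF = P(E|H) / P(E|~H)."""
    return p_e_given_h / p_e_given_not_h

# Nondiscriminating: both sides expect "there is order" about equally.
print(bayes_factor(0.9, 0.85))  # ≈ 1.06: barely moves the needle

# Discriminating: a sharp prediction the rival rates much lower.
print(bayes_factor(0.6, 0.05))  # ≈ 12: now the evidence actually favors H
```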
◉ COLLAPSING THE ALTERNATIVE
You inflate support for your favorite by shrinking the complement likelihood. The complement is a mixture of live rivals plus the unknowns bucket: P(E|¬H) = Σᵢ P(E|Aᵢ) P(Aᵢ|¬H) + P(E|U) P(U|¬H). Dropping U or credible rivals makes P(E|¬H) too small and nearly any E look confirmed.
- ✓ Key terms: COMPLEMENT LIKELIHOOD, MIXTURE, UNKNOWN BUCKET U.
- ✓ Guardrail: name real comparators, give them non-zero weights, and keep U in the partition.
Bayes, done cleanly, exposes inflated factors
Posterior odds equal prior odds times a Bayes factor: P(H|E) / P(¬H|E) = [P(H) / P(¬H)] × BF, with BF = P(E|H) / P(E|¬H).
◉ Inflation hack 1: minimize the complement likelihood by fiat. If you silently exclude U, you rig P(E|¬H) to be tiny, so any E wins. Do it properly: P(E|¬H) = Σᵢ P(E|Aᵢ) P(Aᵢ|¬H) + P(E|U) P(U|¬H).
✓ P(E|¬H): the chance of the evidence if the main hypothesis H is false.
✓ Σᵢ: add up contributions from each specific alternative Aᵢ.
✓ P(E|Aᵢ): how well alternative Aᵢ would produce the evidence.
✓ P(Aᵢ|¬H): how plausible Aᵢ is when H is not true (the weight for that alternative).
✓ U: the "unknowns" bucket for explanations you have not named yet.
✓ P(E|U) P(U|¬H): the share of the evidence explained by those unknown possibilities.
Plain meaning: the chance of the evidence without H is a weighted average of how well all other admissible explanations—and the unknowns—predict it. If you ignore U, you risk making P(E|¬H) too small and accidentally boosting H.
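A hedged Python sketch of hack 1 (all numbers invented for illustration): the same evidence looks far more impressive once the unknowns bucket is silently zeroed out.

```python
def complement_likelihood(alternatives, unknowns=None):
    """P(E|~H) as a weighted mixture of named rivals plus an unknowns bucket.

    alternatives: list of (P(E|A_i), P(A_i|~H)) pairs.
    unknowns: optional (P(E|U), P(U|~H)) pair.
    """
    total = sum(p_e * w for p_e, w in alternatives)
    if unknowns is not None:
        p_e_u, w_u = unknowns
        total += p_e_u * w_u
    return total

p_e_given_h = 0.5                  # how well H predicts the evidence
rivals = [(0.4, 0.5), (0.3, 0.2)]  # (P(E|A_i), P(A_i|~H)) for named rivals
unknowns = (0.2, 0.3)              # (P(E|U), P(U|~H)); all weights sum to 1

honest = complement_likelihood(rivals, unknowns)  # ≈ 0.32
rigged = complement_likelihood(rivals)            # ≈ 0.26: U dropped by fiat

print(p_e_given_h / honest)  # ≈ 1.56: modest support for H
print(p_e_given_h / rigged)  # ≈ 1.92: inflated support from the same evidence
```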
◉ Inflation hack 2: treat generic matches as specific predictions. If H "predicts" broad states that many rivals also fit, then P(E|H) is not special. Penalize vagueness and reward specificity.
◉ Inflation hack 3: ignore dependence. Re-using the same type of testimony as if it were independent multiplies noise. Correct odds need honest dependence accounting.
Analogies that make the mistake obvious
— Mice vs. Mickey
Droppings in the cupboard are evidence for ordinary mice because that’s the kind of trace real mice reliably leave. Jumping from droppings to Mickey Mouse swaps a plain, well-established category (common mice) for a storybook individual (a trademarked cartoon) that was never shown to be a real-world option. The evidential link is generic; the conclusion you’re drawing is hyper-specific. Until you first establish that “Mickey Mouse” is even a live, real-world candidate, the trace cannot count for him.
✓ What the evidence actually supports: small rodents exist here.
✓ What it does not yet support: a specific, highly embellished character.
✓ How to reason cleanly: upgrade only the minimal hypothesis the evidence directly predicts; add decorations (gloves, red shorts, personality, divinity-like attributes) only when new evidence earns them.
— Perpetual motion pitch
A sketch of a machine that “runs forever” is cheap; a physically possible device is expensive. A drawing does not clear the minimum gates of coherence (no contradictions with conservation laws), interface (clear account of how forces and energy flows work), and evaluability (measurements that would show failure). Treating the sketch as a serious rival before it passes those gates is epistemic inflation.
✓ What’s admissible: a mechanism with stated parts, known interactions, and measurable predictions that could be wrong.
✓ What’s not: “Imagine this clever wheel” with no workable path through known constraints.
✓ Clean rule: a diagram earns attention; only a coherent, testable model earns prior probability.
— The supernatural USB port
Saying “my app talks to another realm through an invisible, untestable interface” names a wish, not a hypothesis. Interfaces, by definition, specify how signals move from one domain to another and what you’d observe on each side when it works or fails. Without even a thin mechanism—inputs, outputs, timings, failure modes, and a protocol that others can probe—you have nothing to evaluate.
✓ Minimal bar for admissibility: specify what the interface takes in, what it emits, under which conditions, and what observation would count as a miss.
✓ Why that matters: if any outcome can be reinterpreted as “it still worked, just mysteriously,” the claim cannot lose and therefore never wins honestly.
✓ Practical takeaway: before you treat a “supernatural connection” as a live competitor, demand a concrete, testable handshake between realms—otherwise keep it out of the hypothesis pool.
Five quick case studies
- Disembodied mind
Every observation ties consciousness to a physical or functional base. Until a coherent account of a substrate-free mind exists, "disembodied mind" isn't admissible. Do not hand it P(H) > 0 just because the phrase parses. Passing a grammar check is not passing the coherence test.
- Hidden yet perfectly loving deity
If perfect love plausibly entails robust availability for relationship, then pervasive hiddenness sits in tension with that property. Unless the compatibility is actually demonstrated, treating it as a settled "possibility" is inflation. You don't get to waive the conflict away with a label and still claim evidential parity.
- Timeless causation and resurrection mechanics
"Outside time" causes "in time" events, and dead tissue is restored with full identity. Before you show how a timeless entity instantiates ordered causal relations and how identity is preserved across total biological failure, you have promissory notes, not hypotheses.
- Fine-tuning leap
Cosmic regularities may support a bare creator in the minimal deistic sense, but jumping to a fully specified, doctrine-laden deity smuggles extra content. That's the Mickey move again: from droppings to a cartoon without evidential payment.
- Miracle testimony
Testimony is frequently dependent, error-prone, and incentive-loaded. Treating it as cleanly independent data multiplies likelihoods illegitimately. If you won’t model dependence, you’re minting confirmation out of noise.
Before you say “maybe God did it”…
- ✓ Did you define H with clear content and no mixed-category mashups?
- ✓ Do you have a minimal interface sketch for how H touches the domain it's supposed to affect?
- ✓ Can you say what would lower P(H) if observed?
- ✓ Did you reserve P(U) = ε > 0 so you didn't rig the menu?
- ✓ Are you using the Bayes factor honestly, without shrinking P(E|¬H) by imagination limits?
- ✓ Are you penalizing vagueness by comparing P(E|H) to rivals rather than celebrating any loose concordance?
The Bayesian skeleton you can actually use
When in doubt, write the comparison explicitly. If you are comparing H to a named rival R with an unknowns remainder U, keep this on one line and read it carefully:
P(H|E) / P(R|E) = [P(H) / P(R)] × [P(E|H) / P(E|R)].
And don't forget the background complement while assessing the overall plausibility of H:
P(E|¬H) = P(E|R) P(R|¬H) + P(E|U) P(U|¬H), where P(R|¬H) + P(U|¬H) = 1 given your partition under ¬H.
✓ Meaning: The chance of the evidence when H is false is a weighted average of two live alternatives under ¬H: a named rival R and an unknowns bucket U.
✓ Weights: P(R|¬H) and P(U|¬H). They sum to one because R and U are your whole partition of the "not H" world.
✓ Why it matters: If you drop U by setting P(U|¬H) = 0, you make P(E|¬H) too small and accidentally inflate support for H. Keeping U prevents rigging the denominator.
✓ Example: If P(E|R) = 0.4, P(R|¬H) = 0.7, P(E|U) = 0.2, and P(U|¬H) = 0.3, then P(E|¬H) = 0.4 × 0.7 + 0.2 × 0.3 = 0.34.
✓ Takeaway: Always model P(E|¬H) as a mixture of concrete rivals plus U; your likelihood for the complement should reflect both.
If your evaluation of P(E|¬H) is basically "I refuse to think about it," you're not doing inference; you're curating outcomes.
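The same skeleton, run end to end in Python. The likelihoods mirror the example above; the prior for H is an extra illustrative assumption.

```python
# Partition under ~H: one named rival R and the unknowns bucket U.
p_e_given_r, w_r = 0.4, 0.7  # P(E|R), P(R|~H)
p_e_given_u, w_u = 0.2, 0.3  # P(E|U), P(U|~H)
assert abs(w_r + w_u - 1.0) < 1e-9  # R and U exhaust the "not H" world

p_e_given_not_h = p_e_given_r * w_r + p_e_given_u * w_u  # ≈ 0.34

p_e_given_h = 0.5  # illustrative P(E|H)
prior_h = 0.25     # illustrative P(H)

prior_odds = prior_h / (1 - prior_h)                 # ≈ 0.33
bayes_factor = p_e_given_h / p_e_given_not_h         # ≈ 1.47
posterior_odds = prior_odds * bayes_factor           # ≈ 0.49
posterior_h = posterior_odds / (1 + posterior_odds)  # ≈ 0.33

print(round(posterior_h, 3))  # 0.329: a modest update, not a coronation
```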
Spotting self-sealing claims fast
➘ If redefining a term makes any disconfirming outcome still count, the claim is not evaluable.
When results go the wrong way and the response is to change what the key word means, you’ve moved the goalposts. A testable claim must lock its terms before the test and keep them fixed after. If “healed” meant lower fever within 24 hours, you can’t switch it to “felt spiritually better” once the data arrive. If “prophecy” meant X happening by date Y, it cannot become “a symbolic fulfillment” on date Z. Do this and you’ve guaranteed that nothing can count against the claim, which means nothing can honestly count for it either.
✓ Freeze definitions up front.
✓ Write one outcome that would clearly count against the claim and one that would count for it.
✓ If both outcomes are later reinterpreted as support, the claim is unevaluable by design.
➘ If adding detail never decreases P(H), you're not risking prediction; you're rehearsing a slogan.
Real hypotheses trade coverage for commitment. More detail should narrow the outcome space, which normally lowers the prior while potentially raising P(E|H) if the detail hits. If you keep stacking details but insist the prior never drops, you're getting a free lunch: specificity without the cost. That is story-building, not inference (see the sketch after this list).
✓ Each added detail should create at least one plausible observation that would count against H.
✓ If every layer is marketed as "compatible with anything," it is decoration, not prediction.
✓ Watch the tradeoff: a tighter P(E|H) should be paired with a lower P(H); otherwise you're gaming the update.
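A small sketch of that tradeoff (the conditional probabilities are invented): each layer of detail multiplies the prior down by the conjunction rule, even when it sharpens the likelihood.

```python
# Prior for the bare hypothesis, then the probability of each added detail
# conditional on everything before it (all values illustrative).
prior = 0.3
details = [0.5, 0.4, 0.3]  # P(detail_k | H and earlier details)

for k, p_detail in enumerate(details, start=1):
    prior *= p_detail  # conjunction rule: P(H & D) = P(H) * P(D | H)
    print(f"after detail {k}: prior = {prior:.4f}")

# after detail 1: prior = 0.1500
# after detail 2: prior = 0.0600
# after detail 3: prior = 0.0180
# A story that claims ever-richer detail with no drop in prior is taking
# specificity's reward while refusing to pay its cost.
```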
➘ If your story keeps borrowing generic features of the world that all rivals also predict, it isn’t gaining traction.
Citing broad facts like "there is order," "math works," or "people have experiences" does not move the needle if rival hypotheses also expect them. In Bayesian terms, when P(E|H) ≈ P(E|¬H), the Bayes factor is about 1, so the evidence is neutral. Treating shared-background facts as unique confirmations is padding, not progress.
✓ Name real comparators and ask what they actually predict about this specific evidence.
✓ Favor contrastive, risky predictions that your rivals would rate much lower, so P(E|H) / P(E|¬H) is meaningfully above 1.
✓ If the “evidence” would have looked the same under your main rivals, it isn’t support; it’s ambience.
Exercises you can try this week
- ✓ Take a popular claim you hear on campus and force it through the admissibility norm. Write exactly what would make you drop H.
- ✓ Pick a favored explanation and compute a toy Bayes factor against a decent rival, even if you must approximate the likelihoods. Then add an unknowns U term and see how the odds move.
- ✓ Rewrite a vague prediction into a sharp one. If the likelihood didn’t change, you probably kept it vague.
- ✓ Ask three friends to independently restate the evidence E. If the statements differ wildly, your likelihoods are fragile.
Mini FAQ
Doesn’t logical possibility guarantee a non-zero prior?
No. Logical possibility is cheap. Admissibility demands coherence with an interface and evaluability criteria. Until then, P(H) is undefined, not just tiny.
Isn't U a cop-out?
It's the opposite. U keeps you honest about ignorance and prevents premature foreclosure on live natural explanations. Refusing the reserve is the real cop-out.
But my hypothesis is simple. Doesn’t simplicity win?
Simplicity helps only among admissible rivals with comparable fit to E. A simple incoherence is still incoherent.
What if H is beyond human comprehension?
Then it’s beyond evaluation and shouldn’t be in the pool. You can believe it as poetry if you like, but don’t claim evidential parity.
Quick field guide to honest likelihoods
When you assess P(E|H), check three things on the spot:
✓ Specificity
Ask whether H actually rules things out. A specific claim sharply narrows which outcomes would count as success; a vague claim lets almost anything pass. "There will be an earthquake somewhere this year" is non-specific because nearly any outcome fits. "A quake of magnitude at least 6 within 50 km of Tokyo in October" is specific because most outcomes would falsify it. In Bayesian terms, specificity shows up as a larger gap between P(E|H) and the background rates, not as a post-hoc story that can hug any data. Quick test: list three concrete outcomes that would make H fail; if you cannot, H lacks specificity.
✓ Directionality
Good hypotheses stick their necks out: some plausible observations would push their probability down. That is directionality. If every outcome can be spun as support for H—success is a hit, failure is "hidden reasons," null results are "subtle effects"—then H is self-sealing and unevaluable. A directional medical claim might say "patients who receive treatment T recover at least 10 percent faster than matched controls." Now worse-than-controls outcomes would clearly lower P(H|E). A nondirectional claim like "treatment T works in mysterious ways" never loses; therefore it never honestly wins. Quick test: write down the outcome pattern that would count against H in advance; if none exists, you have a self-sealing claim.
✓ Comparators
Evidence favors H only relative to rivals. Name them. For a healing report, live competitors might include placebo response, natural remission, measurement error, fraud, misattribution, and the unknowns bucket U. The denominator of your Bayes factor needs those alternatives: P(E|¬H) = Σᵢ P(E|Aᵢ) P(Aᵢ|¬H) + P(E|U) P(U|¬H). If you ignore realistic Aᵢ or drop U, you artificially shrink P(E|¬H) and make H look stronger than it is. Quick test: list the top three concrete rivals plus U, give each a nonzero weight, and check whether the evidence clearly prefers H over those—not just over a strawman.
A sharper look at testimony
Testimony is evidence, but it's often dependent, filtered, and incentivized. Don't pretend n statements are n independent draws if they share sources, communities, or scripts. If you can't model dependence precisely, at least bound it. That keeps your posterior from exploding just because you multiplied what shouldn't be multiplied.
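One crude way to bound dependence in Python (the discount rule and the numbers are assumptions for illustration, not a standard model): credit only an effective number of independent reports instead of multiplying all of them.

```python
def naive_combined_lr(lr_per_report: float, n_reports: int) -> float:
    """Treats every report as an independent draw -- the mistake."""
    return lr_per_report ** n_reports

def bounded_combined_lr(lr_per_report: float, n_reports: int,
                        n_effective: float) -> float:
    """Credits only n_effective independent reports: a crude dependence bound."""
    return lr_per_report ** min(n_reports, n_effective)

# Ten reports, each worth a likelihood ratio of 2 on its own, but all drawn
# from one community and one script -- maybe two genuinely independent sources.
print(naive_combined_lr(2.0, 10))         # 1024.0: an exploding posterior
print(bounded_combined_lr(2.0, 10, 2.0))  # 4.0: an honest upper bound
```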
A focus on building models
Draft two columns: admissible and inadmissible. Place candidate hypotheses in one column only after they meet coherence, interface, and evaluability. Then:
P(H|E) / P(¬H|E) = [P(H) / P(¬H)] × [P(E|H) / P(E|¬H)].
Compute both terms honestly, with U included in ¬H. If your conclusion flips when you reintroduce U, you just caught prior inflation.
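A sketch of that flip check (all numbers hypothetical): compute the posterior odds with and without U inside ¬H and see whether the verdict survives.

```python
def posterior_odds(prior_h, p_e_given_h, mixture):
    """mixture: list of (P(E|X), P(X|~H)) pairs partitioning ~H."""
    p_e_given_not_h = sum(p_e * w for p_e, w in mixture)
    return (prior_h / (1 - prior_h)) * (p_e_given_h / p_e_given_not_h)

prior_h, p_e_given_h = 0.5, 0.6

without_u = [(0.1, 1.0)]           # one weak rival, unknowns bucket dropped
with_u = [(0.1, 0.7), (0.5, 0.3)]  # same rival reweighted, plus U

print(posterior_odds(prior_h, p_e_given_h, without_u))  # ≈ 6.0: looks decisive
print(posterior_odds(prior_h, p_e_given_h, with_u))     # ≈ 2.7: much weaker
# If the verdict collapses when U returns, it was prior inflation, not evidence.
```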
Bottom line
Admitting undefined, incoherent, or unevaluable claims into your hypothesis space is how you mint counterfeit probability. Enforce coherence, interface, and evaluability first. Keep an explicit unknowns reserve P(U) = ε > 0. Compare rivals with real Bayes factors that don't starve the denominator. Refuse to let clever phrases pose as live options. That's how you stop epistemic inflation and keep your credences proportionate to the actual evidence.
