If you can say “timeless cause” or “disembodied mind,” that doesn’t make it a live option in a serious inference. Words aren’t lottery tickets for probability. The habit of treating a cool-sounding description as if it were a real, testable possibility is epistemic inflation. It counterfeits probability by sneaking undefined or incoherent ideas into the hypothesis space and then cashing them out as if they earned a share of the prior. The remedy is straightforward: enforce admission rules, budget for the unknowns you haven’t imagined, and apply Bayes without rigging the denominator.

Admissibility norm: before you assign any hypothesis H a non-zero prior, it must clear three gates:

Coherence: Is H specified without contradiction, with clear truth-conditions you could, in principle, check?
Interface: If H claims effects in the physical world, is there even a schematic bridge for how it interacts with matter, time, or causation?
Evaluability: Can H be constrained by evidence, logic, or probability rather than hide behind mystery?

If a proposal fails these, it doesn’t deserve a slice of P(H). It isn’t “very small.” It’s not admissible. Treating non-admissible claims as if they merely have tiny priors is the core mistake; you’re still letting them into the game.

Do not force all probability mass into the few stories you can currently imagine. Keep an explicit reserve for unimagined mechanisms U. Use a protected remainder so familiar options don’t look inevitable simply because you silently excluded what you couldn’t name:

\sum_{i=1}^{k} P(H_i) \le 1 - r \ \text{ with }\ r = P(U) > 0.

Plain meaning: do not spend your entire probability budget on the hypotheses you can currently name. Reserve a positive chunk r for an “unknowns” option U. That is why the total prior assigned to the listed hypotheses H_1,\dots,H_k must be at most 1 - r.

\sum_{i=1}^{k} P(H_i): the combined prior you give to the named competitors H_1,\dots,H_k.
r = P(U) > 0: the explicit probability for unimagined or not-yet-modeled explanations U; it must be strictly above zero.
✓ Why this matters: if r were set to 0, you would quietly exclude future discoveries and make a familiar hypothesis look inevitable simply because you pre-allocated all the mass to what you could name.

Example: if you keep r = 0.20, then your known hypotheses can sum to at most 1 - r = 0.80. If you currently have three options, you might assign P(H_1)=0.40, P(H_2)=0.25, P(H_3)=0.15, which totals 0.80, leaving 0.20 for U. This protects against overconfidence and prevents inflating a pet hypothesis by starving its real competition.

That r (reserve) acknowledges reality: future discoveries and currently unmodeled natural explanations exist. Without it, you inflate your pet hypotheses by starving their competition.
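The budgeting rule is easy to make mechanical rather than aspirational. Here is a minimal sketch in Python (chosen purely for illustration); the function name, hypothesis labels, and numbers are hypothetical, not from any real analysis:

```python
# Enforce the reserve rule: named priors must leave at least r for the unknowns U.

def allocate_priors(named, r):
    """named maps hypothesis names to priors; r is the reserve for U."""
    if not 0.0 < r < 1.0:
        raise ValueError("reserve r must be strictly between 0 and 1")
    total = sum(named.values())
    if total > 1.0 - r + 1e-12:
        raise ValueError(f"named priors sum to {total:.2f} > 1 - r = {1.0 - r:.2f}")
    priors = dict(named)
    priors["U"] = 1.0 - total  # leftover mass goes to unknowns, at least r
    return priors

# The example from above: three named options under a 0.20 reserve.
priors = allocate_priors({"H1": 0.40, "H2": 0.25, "H3": 0.15}, r=0.20)
print(f"P(U) = {priors['U']:.2f}")  # 0.20
```

Trying to hand the named options 0.90 of the mass under r = 0.20 raises an error, which is exactly the point: the check fails loudly instead of letting you starve the competition silently.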

◉ CONCEIVABILITY-AS-POSSIBILITY
Imaginability is not admissibility. Being able to picture a “timeless cause” or “disembodied mind” does not earn the claim a seat at the table. A candidate must clear three gates: COHERENCE (no internal contradiction), INTERFACE (at least a thin story for how it touches the observable world), and EVALUABILITY (what would lower its credence). Until then, it does not get a prior. Write it this way so you don’t cheat: P(H)=\text{undefined until admissibility is shown, not merely tiny}.

◉ POSSIBILITY-BY-STIPULATION
Stipulating “omnipotence makes anything possible” deletes constraints instead of arguing through them. A move that works equally well no matter what we observe explains nothing, so it earns no prior. If every outcome would be counted as a hit, the Bayes factor is neutral: \mathrm{BF}=\frac{P(E \mid H)}{P(E \mid \neg H)}\approx 1.

◉ PROMISCUOUS PRIORS
Handing out non-zero priors to any sentence that parses crowds the hypothesis space with fictions. The order is ADMISSIBILITY FIRST, ALLOCATION SECOND. Keep an explicit reserve for unknowns so you don’t starve real competitors: \sum_{i=1}^{k} P(H_i) \le 1-r\ \text{ with }\ r=P(U)>0.

◉ EQUIVOCAL EVIDENCE
Data that fit many stories equally do not move the needle. When rivals expect the evidence to roughly the same degree, the evidence is nondiscriminating: P(E \mid H_1)\approx P(E \mid H_2), so updates should be small. Treating background facts like “there is order” as unique confirmations is padding, not progress.

◉ COLLAPSING THE ALTERNATIVE
You inflate support for your favorite by shrinking the complement likelihood. The complement is a mixture of live rivals plus the unknowns bucket: P(E \mid \neg H)=\sum_i P(E \mid H_i)P(H_i \mid \neg H)+P(E \mid U)P(U \mid \neg H). Dropping U or credible rivals makes P(E \mid \neg H) too small and nearly any H look confirmed.

Posterior odds equal prior odds times a Bayes factor:

\text{Odds}(H:E) = \text{Odds}(H) \times \mathrm{BF} with \mathrm{BF} = \frac{P(E \mid H)}{P(E \mid \neg H)}.

Inflation hack 1: minimize the complement likelihood by fiat. If you silently exclude U, you rig P(E \mid \neg H) to be tiny, so any H wins. Do it properly:

P(E \mid \neg H) = \sum_{i} P(E \mid H_i) P(H_i \mid \neg H) + P(E \mid U) P(U \mid \neg H).

P(E \mid \neg H): the chance of the evidence if the main hypothesis H is false.
\sum_i: add up contributions from each specific alternative H_i.
P(E \mid H_i): how well alternative H_i would produce the evidence.
P(H_i \mid \neg H): how plausible H_i is when H is not true (the weight for that alternative).
U: the “unknowns” bucket for explanations you have not named yet.
P(E \mid U)P(U \mid \neg H): the share of the evidence explained by those unknown possibilities.

Plain meaning: the chance of the evidence without H is a weighted average of how well all other admissible explanations—and the unknowns—predict it. If you ignore U, you risk making P(E \mid \neg H) too small and accidentally boosting H.
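A toy calculation makes the hack concrete. All values below are illustrative: a named rival H_2 that predicts the evidence poorly, and an unknowns bucket U held at a modest base rate rather than dismissed.

```python
# Inflation hack 1 in numbers (all values illustrative, not from any real case).

def complement_likelihood(mix):
    """P(E|not-H) = sum of P(E|Hi) * P(Hi|not-H); mix maps name -> (likelihood, weight)."""
    total_w = sum(w for _, w in mix.values())
    assert abs(total_w - 1.0) < 1e-9, "weights under not-H must sum to 1"
    return sum(lik * w for lik, w in mix.values())

p_e_h = 0.30  # how well H predicts the evidence

# Honest complement: a weak named rival plus an unknowns bucket at a base rate.
honest = complement_likelihood({"H2": (0.02, 0.7), "U": (0.15, 0.3)})

# Rigged complement: U silently excluded, all weight on the weak rival.
rigged = complement_likelihood({"H2": (0.02, 1.0)})

print(f"BF honest: {p_e_h / honest:.2f}")  # ~5.08
print(f"BF rigged: {p_e_h / rigged:.2f}")  # 15.00
```

Same evidence, same H: excluding U nearly triples the Bayes factor. Nothing about the world changed; only the menu did.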

Inflation hack 2: treat generic matches as specific predictions. If H “predicts” broad states that many rivals also fit, then P(E \mid H) is not special. Penalize vagueness and reward specificity.

Inflation hack 3: ignore dependence. Re-using the same type of testimony as if it were independent multiplies noise. Correct odds need honest dependence accounting.

— Mice vs. Mickey

Droppings in the cupboard are evidence for ordinary mice because that’s the kind of trace real mice reliably leave. Jumping from droppings to Mickey Mouse swaps a plain, well-established category (common mice) for a storybook individual (a trademarked cartoon) that was never shown to be a real-world option. The evidential link is generic; the conclusion you’re drawing is hyper-specific. Until you first establish that “Mickey Mouse” is even a live, real-world candidate, the trace cannot count for him.

✓ What the evidence actually supports: small rodents exist here.
✓ What it does not yet support: a specific, highly embellished character.
✓ How to reason cleanly: upgrade only the minimal hypothesis the evidence directly predicts; add decorations (gloves, red shorts, personality, divinity-like attributes) only when new evidence earns them.

— Perpetual motion pitch

A sketch of a machine that “runs forever” is cheap; a physically possible device is expensive. A drawing does not clear the minimum gates of coherence (no contradictions with conservation laws), interface (clear account of how forces and energy flows work), and evaluability (measurements that would show failure). Treating the sketch as a serious rival before it passes those gates is epistemic inflation.

✓ What’s admissible: a mechanism with stated parts, known interactions, and measurable predictions that could be wrong.
✓ What’s not: “Imagine this clever wheel” with no workable path through known constraints.
✓ Clean rule: a diagram earns attention; only a coherent, testable model earns prior probability.

— The supernatural USB port

Saying “my app talks to another realm through an invisible, untestable interface” names a wish, not a hypothesis. Interfaces, by definition, specify how signals move from one domain to another and what you’d observe on each side when it works or fails. Without even a thin mechanism—inputs, outputs, timings, failure modes, and a protocol that others can probe—you have nothing to evaluate.

✓ Minimal bar for admissibility: specify what the interface takes in, what it emits, under which conditions, and what observation would count as a miss.
✓ Why that matters: if any outcome can be reinterpreted as “it still worked, just mysteriously,” the claim cannot lose and therefore never wins honestly.
✓ Practical takeaway: before you treat a “supernatural connection” as a live competitor, demand a concrete, testable handshake between realms—otherwise keep it out of the hypothesis pool.

— Five test cases

  1. Disembodied mind
    Every observation ties consciousness to a physical or functional base. Until a coherent account of a substrate-free mind exists, “disembodied mind” isn’t admissible. Do not hand it P(H) > 0 just because the phrase parses. Passing a grammar check is not passing the coherence test.
  2. Hidden yet perfectly loving deity
    If perfect love plausibly entails robust availability for relationship, then pervasive hiddenness sits in tension with that property. Unless the compatibility is actually demonstrated, treating it as a settled “possibility” is inflation. You don’t get to waive the conflict away with a label and still claim evidential parity.
  3. Timeless causation and resurrection mechanics
    “Outside time” causes “in time” events, and dead tissue is restored with full identity. Before you show how a timeless entity instantiates ordered causal relations and how identity is preserved across total biological failure, you have promissory notes, not hypotheses.
  4. Fine-tuning leap
    Cosmic regularities may support a bare creator in the minimal deistic sense, but jumping to a fully specified, doctrine-laden deity smuggles extra content. That’s the Mickey move again: from droppings to a cartoon without evidential payment.
  5. Miracle testimony
    Testimony is frequently dependent, error-prone, and incentive-loaded. Treating it as cleanly independent data multiplies likelihoods illegitimately. If you won’t model dependence, you’re minting confirmation out of noise.
— Checklist before you update

  1. Did you define H with clear content and no mixed-category mashups?
  2. Do you have a minimal interface sketch for how H touches the domain it’s supposed to affect?
  3. Can you say what would lower P(H) if observed?
  4. Did you reserve r = P(U) > 0 so you didn’t rig the menu?
  5. Are you using \mathrm{BF} = \frac{P(E \mid H)}{P(E \mid \neg H)} honestly, without shrinking P(E \mid \neg H) by imagination limits?
  6. Are you penalizing vagueness by comparing P(E \mid H) to rivals rather than celebrating any loose concordance?

When in doubt, write the comparison explicitly. If you are comparing H_1 to H_2 with an unknowns remainder U, the posterior odds ratio is:

\frac{P(H_1 \mid E)}{P(H_2 \mid E)} = \frac{P(H_1)}{P(H_2)} \times \frac{P(E \mid H_1)}{P(E \mid H_2)}.

And don’t forget the background complement while assessing the overall plausibility of H_1:

P(E \mid \neg H_1) = \alpha P(E \mid H_2) + \beta P(E \mid U) where \alpha + \beta = 1 given your partition under \neg H_1.

Meaning: The chance of the evidence when H_1 is false is a weighted average of two live alternatives under \neg H_1: a named rival H_2 and an unknowns bucket U.

Weights: \alpha=P(H_2 \mid \neg H_1) and \beta=P(U \mid \neg H_1). They sum to one because H_2 and U are your whole partition of the “not H_1” world.

Why it matters: If you drop U by setting \beta=0, you make P(E \mid \neg H_1) too small and accidentally inflate support for H_1. Keeping U prevents rigging the denominator.

Example: If \alpha=0.7, P(E \mid H_2)=0.20, \beta=0.3, and P(E \mid U)=0.05, then P(E \mid \neg H_1)=0.7\times 0.20+0.3\times 0.05=0.155.

Takeaway: Always model \neg H_1 as a mixture of concrete rivals plus U; your likelihood for the complement should reflect both.
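The arithmetic is worth checking by hand or in a couple of lines. The sketch below uses the same illustrative numbers and also shows what happens when the U term is quietly dropped from the sum:

```python
# Recomputing the two-component mixture (all numbers illustrative).
alpha, beta = 0.7, 0.3      # weights of H2 and U under not-H1
p_e_h2, p_e_u = 0.20, 0.05  # how well each alternative predicts E

assert abs(alpha + beta - 1.0) < 1e-12  # H2 and U partition the not-H1 world

p_e_not_h1 = alpha * p_e_h2 + beta * p_e_u
print(f"with U:    {p_e_not_h1:.3f}")   # 0.155

# Dropping the U term from the sum shrinks the complement likelihood:
rigged = alpha * p_e_h2
print(f"U dropped: {rigged:.3f}")       # 0.140
```

The smaller the denominator, the bigger the Bayes factor for H_1, so the dropped term is a quiet subsidy for your favorite.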

If your evaluation of P(E \mid U) is basically “I refuse to think about it,” you’re not doing inference; you’re curating outcomes.

➘ If redefining a term makes any disconfirming outcome still count, the claim is not evaluable.
When results go the wrong way and the response is to change what the key word means, you’ve moved the goalposts. A testable claim must lock its terms before the test and keep them fixed after. If “healed” meant lower fever within 24 hours, you can’t switch it to “felt spiritually better” once the data arrive. If “prophecy” meant X happening by date Y, it cannot become “a symbolic fulfillment” on date Z. Do this and you’ve guaranteed that nothing can count against the claim, which means nothing can honestly count for it either.

✓ Freeze definitions up front.
✓ Write one outcome that would clearly count against the claim and one that would count for it.
✓ If both outcomes are later reinterpreted as support, the claim is unevaluable by design.

➘ If adding detail never decreases P(H), you’re not risking prediction; you’re rehearsing a slogan.
Real hypotheses trade coverage for commitment. More detail should narrow the outcome space, which normally lowers the prior P(H) while potentially raising P(E \mid H) if the detail hits. If you keep stacking details but insist the prior never drops, you’re getting a free lunch: specificity without the cost. That is story-building, not inference.

✓ Each added detail should create at least one plausible observation that would count against H.
✓ If every layer is marketed as “compatible with anything,” it is decoration, not prediction.
✓ Watch the tradeoff: tighter P(E \mid H) should be paired with a lower P(H); otherwise you’re gaming the update.

➘ If your story keeps borrowing generic features of the world that all rivals also predict, it isn’t gaining traction.
Citing broad facts like “there is order,” “math works,” or “people have experiences” does not move the needle if rival hypotheses also expect them. In Bayesian terms, when P(E \mid H_1) \approx P(E \mid H_2), the Bayes factor is about 1, so the evidence is neutral. Treating shared-background facts as unique confirmations is padding, not progress.

✓ Name real comparators and ask what they actually predict about this specific evidence.
✓ Favor contrastive, risky predictions that your rivals would rate much lower, so \mathrm{BF} = \frac{P(E \mid H)}{P(E \mid \text{rival})} is meaningfully above 1.
✓ If the “evidence” would have looked the same under your main rivals, it isn’t support; it’s ambience.
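The neutrality of shared-background evidence is easy to see numerically. A quick sketch with purely illustrative probabilities:

```python
# Bayes factors for shared-background vs. contrastive evidence (illustrative values).

def bayes_factor(p_e_h, p_e_rival):
    """BF = P(E|H) / P(E|rival); near 1 means the evidence discriminates nothing."""
    return p_e_h / p_e_rival

# "There is order": both H and its rival expect it strongly.
print(f"{bayes_factor(0.95, 0.90):.2f}")  # 1.06 -- essentially neutral

# A risky prediction the rival rates much lower: genuine support.
print(f"{bayes_factor(0.60, 0.05):.1f}")  # 12.0
```

The first number barely moves any posterior; the second one does real work. That gap, not the warm feeling of compatibility, is what support means.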

  1. Take a popular claim you hear on campus and force it through the admissibility norm. Write exactly what would make you drop P(H).
  2. Pick a favored explanation and compute a toy Bayes factor against a decent rival, even if you must approximate P(E \mid \cdot). Then add a U term and see how the odds move.
  3. Rewrite a vague prediction into a sharp one. If the likelihood didn’t change, you probably kept it vague.
  4. Ask three friends to independently restate the evidence E. If the statements differ wildly, your likelihoods are fragile.

Doesn’t logical possibility guarantee a non-zero prior?
No. Logical possibility is cheap. Admissibility demands coherence with an interface and evaluability criteria. Until then, P(H) is undefined, not just tiny.

Isn’t U a cop-out?
It’s the opposite. U keeps you honest about ignorance and prevents premature foreclosure on live natural explanations. Refusing U is the real cop-out.

But my hypothesis is simple. Doesn’t simplicity win?
Simplicity helps only among admissible rivals with comparable fit to E. A simple incoherence is still incoherent.

What if H is beyond human comprehension?
Then it’s beyond evaluation and shouldn’t be in the pool. You can believe it as poetry if you like, but don’t claim evidential parity.

When you assess P(E \mid H), check three things on the spot:

✓ Specificity
Ask whether H actually rules things out. A specific claim sharply narrows which outcomes would count as success; a vague claim lets almost anything pass. “There will be an earthquake somewhere this year” is non-specific because nearly any outcome fits. “A quake of magnitude at least 6 within 50 km of Tokyo in October” is specific because most outcomes would falsify it. In Bayesian terms, specificity shows up as a larger gap between P(E \mid H) and the background rates, not as a post-hoc story that can hug any data. Quick test: list three concrete outcomes that would make H fail; if you cannot, H lacks specificity.

✓ Directionality
Good hypotheses stick their necks out: some plausible observations would push their probability down. That is directionality. If every outcome can be spun as support for H—success is a hit, failure is “hidden reasons,” null results are “subtle effects”—then H is self-sealing and unevaluable. A directional medical claim might say “patients who receive treatment T recover at least 10 percent faster than matched controls.” Now worse-than-controls outcomes would clearly lower P(H \mid E). A nondirectional claim like “treatment T works in mysterious ways” never loses; therefore it never honestly wins. Quick test: write down the outcome pattern that would count against H in advance; if none exists, you have a self-sealing claim.

✓ Comparators
Evidence favors H only relative to rivals. Name them. For a healing report, live competitors might include placebo response, natural remission, measurement error, fraud, misattribution, and the unknowns bucket U. The denominator of your Bayes factor needs those alternatives: P(E \mid \neg H) = \sum_i P(E \mid H_i) P(H_i \mid \neg H) + P(E \mid U) P(U \mid \neg H). If you ignore realistic H_i or drop U, you artificially shrink P(E \mid \neg H) and make H look stronger than it is. Quick test: list the top three concrete rivals plus U, give each a nonzero weight, and check whether the evidence clearly prefers H over those—not just over a strawman.

Testimony is evidence, but it’s often dependent, filtered, and incentivized. Don’t pretend n statements are n independent draws if they share sources, communities, or scripts. If you can’t model dependence precisely, at least bound it. That keeps your posterior from exploding just because you multiplied what shouldn’t be multiplied.
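One rough way to bound dependence, if you cannot model it precisely, is the design-effect approximation borrowed from survey sampling: n reports with average pairwise correlation \rho carry roughly n / (1 + (n-1)\rho) reports' worth of independent information. This is a standard heuristic used here as an assumption, not a precise model, and the numbers are illustrative:

```python
# Bounding dependent testimony with the Kish design-effect heuristic.
# All numbers illustrative.

def effective_reports(n, rho):
    """n correlated reports behave like roughly n_eff independent ones."""
    return n / (1 + (n - 1) * rho)

n, rho = 10, 0.5  # ten reports sharing sources, scripts, or community
lr = 2.0          # likelihood ratio each report would carry if independent

naive = lr ** n                           # pretend independence: LRs multiply
bounded = lr ** effective_reports(n, rho)

print(f"naive combined LR:   {naive:.0f}")    # 1024
print(f"bounded combined LR: {bounded:.1f}")  # ~3.5
```

Independence (rho = 0) recovers the naive product; perfect correlation (rho = 1) collapses ten reports into one. Even a crude bound like this keeps a shared script from masquerading as ten witnesses.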

Draft two columns: admissible and inadmissible. Place candidate hypotheses in one column only after they meet coherence, interface, and evaluability. Then:

\text{Odds}(H:E) = \text{Odds}(H) \times \frac{P(E \mid H)}{P(E \mid \neg H)}.

Compute both terms honestly, with U included in \neg H. If your conclusion flips when you reintroduce U, you just caught prior inflation.

Admitting undefined, incoherent, or unevaluable claims into your hypothesis space is how you mint counterfeit probability. Enforce coherence, interface, and evaluability first. Keep an explicit unknowns reserve r = P(U) > 0. Compare rivals with real Bayes factors that don’t starve the denominator. Refuse to let clever phrases pose as live options. That’s how you stop epistemic inflation and keep your credences proportionate to the actual evidence.

