Being able to say “timeless cause” or “disembodied mind” does not make either one a live option in a serious inference. Words aren’t lottery tickets for probability. The habit of treating a cool-sounding description as if it were a real, testable possibility is epistemic inflation. It counterfeits probability by sneaking undefined or incoherent ideas into the hypothesis space and then cashing them out as if they earned a share of the prior. The remedy is straightforward: enforce admission rules, budget for the unknowns you haven’t imagined, and apply Bayes without rigging the denominator.

Apply the Admissibility Norm to any hypothesis H before you assign it a non-zero prior:

Coherence: Is H specified without contradiction, with clear truth-conditions you could, in principle, check?
Interface: If H claims effects in the physical world, is there even a schematic bridge for how it interacts with matter, time, or causation?
Evaluability: Can H be constrained by evidence, logic, or probability rather than hide behind mystery?

If a proposal fails these, it doesn’t deserve a slice of P(H). It isn’t “very small.” It’s not admissible. Treating non-admissible claims as if they merely have tiny priors is the core mistake; you’re still letting them into the game.

Do not force all probability mass into the few stories you can currently imagine. Keep an explicit reserve for unimagined mechanisms U. Use a protected remainder so familiar options don’t look inevitable simply because you silently excluded what you couldn’t name:

\sum_{i=1}^{k} P(H_i) \le 1 - r \ \text{ with } \ r = P(U) > 0.

Plain meaning: do not spend your entire probability budget on the hypotheses you can currently name. Reserve a positive chunk r for an “unknowns” option U. That is why the total prior assigned to the listed hypotheses H_1,\dots,H_k must be at most 1 - r.

\sum_{i=1}^{k} P(H_i): the combined prior you give to the known competitors H_1,\dots,H_k.
r = P(U) > 0: the explicit probability for unimagined or not-yet-modeled explanations U; it must be strictly above zero.
✓ Why this matters: if r were set to 0, you would quietly exclude future discoveries and make a familiar hypothesis look inevitable simply because you pre-allocated all the mass to what you could name.

Example: if you keep r = 0.20, then your known hypotheses can sum to at most 1 - r = 0.80. If you currently have three options, you might assign P(H_1)=0.40, P(H_2)=0.25, P(H_3)=0.15, which totals 0.80, leaving 0.20 for U. This protects against overconfidence and prevents inflating a pet hypothesis by starving its real competition.

That r (reserve) acknowledges reality: future discoveries and currently unmodeled natural explanations exist. Without it, you inflate your pet hypotheses by starving their competition.
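
As a minimal sketch, the budgeting rule looks like this in Python; the function name and the raw weights are illustrative, not a prescribed procedure:

```python
def allocate_priors(known: dict[str, float], reserve: float) -> dict[str, float]:
    """Scale raw weights over named hypotheses so they sum to 1 - reserve,
    leaving an explicit protected slice for the unknowns bucket U."""
    if not (0.0 < reserve < 1.0):
        raise ValueError("reserve r = P(U) must be strictly between 0 and 1")
    total = sum(known.values())
    priors = {h: (w / total) * (1.0 - reserve) for h, w in known.items()}
    priors["U"] = reserve
    return priors

# The three-option example from above, with r = 0.20.
print(allocate_priors({"H1": 0.40, "H2": 0.25, "H3": 0.15}, reserve=0.20))
# -> roughly {'H1': 0.40, 'H2': 0.25, 'H3': 0.15, 'U': 0.20}
```

Scaling the named weights to fit under 1 - r keeps their relative standing while guaranteeing U its protected slice.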

◉ CONCEIVABILITY-AS-POSSIBILITY
Imaginability is not admissibility. Being able to picture a “timeless cause” or “disembodied mind” does not earn the claim a seat at the table. A candidate must clear three gates: COHERENCE (no internal contradiction), INTERFACE (at least a thin story for how it touches the observable world), and EVALUABILITY (what would lower its credence). Until then, it does not get a prior. Write it this way so you don’t cheat: P(H) = \text{undefined until admissibility is shown}, not merely tiny.

◉ POSSIBILITY-BY-STIPULATION
Stipulating “omnipotence makes anything possible” deletes constraints instead of arguing through them. A move that works equally well no matter what we observe explains nothing, so it earns no prior. If every outcome would be counted as a hit, the Bayes factor is neutral: \mathrm{BF}=\frac{P(E \mid H)}{P(E \mid \neg H)}\approx 1.

◉ PROMISCUOUS PRIORS
Handing out non-zero priors to any sentence that parses crowds the hypothesis space with fictions. The order is ADMISSIBILITY FIRST, ALLOCATION SECOND. Keep an explicit reserve for unknowns so you don’t starve real competitors: \sum_{i=1}^{k} P(H_i) \le 1-r\ \text{ with }\ r=P(U)>0.

◉ EQUIVOCAL EVIDENCE
Data that fit many stories equally do not move the needle. When rivals expect the evidence to roughly the same degree, the evidence is nondiscriminating: P(E \mid H_1)\approx P(E \mid H_2), so updates should be small. Treating background facts like “there is order” as unique confirmations is padding, not progress.

◉ COLLAPSING THE ALTERNATIVE
You inflate support for your favorite by shrinking the complement likelihood. The complement is a mixture of live rivals plus the unknowns bucket: P(E \mid \neg H)=\sum_i P(E \mid H_i)P(H_i \mid \neg H)+P(E \mid U)P(U \mid \neg H). Dropping U or credible rivals makes P(E \mid \neg H) too small and nearly any H look confirmed.

Posterior odds equal prior odds times a Bayes factor:

\text{Odds}(H \mid E) = \text{Odds}(H) \times \mathrm{BF} \ \text{ with } \ \mathrm{BF} = \frac{P(E \mid H)}{P(E \mid \neg H)}.

Inflation hack 1: minimize the complement likelihood by fiat. If you silently exclude U, you rig P(E \mid \neg H) to be tiny, so any H wins. Do it properly:

P(E \mid \neg H) = \sum_{i} P(E \mid H_i) P(H_i \mid \neg H) + P(E \mid U) P(U \mid \neg H).

P(E \mid \neg H): the chance of the evidence if the main hypothesis H is false.
\sum_i: add up contributions from each specific alternative H_i.
P(E \mid H_i): how well alternative H_i would produce the evidence.
P(H_i \mid \neg H): how plausible H_i is when H is not true (the weight for that alternative).
U: the “unknowns” bucket for explanations you have not named yet.
P(E \mid U)P(U \mid \neg H): the share of the evidence explained by those unknown possibilities.

Plain meaning: the chance of the evidence without H is a weighted average of how well all other admissible explanations—and the unknowns—predict it. If you ignore U, you risk making P(E \mid \neg H) too small and accidentally boosting H.
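
Here is the same bookkeeping as a short Python sketch. Every likelihood and weight is an invented placeholder; the point is only what happens to the Bayes factor when U is dropped:

```python
def complement_likelihood(rivals, p_e_given_u, p_u):
    """P(E | not-H): sum of P(E | H_i) * P(H_i | not-H) over named rivals,
    plus the unknowns term P(E | U) * P(U | not-H)."""
    return sum(le * w for le, w in rivals) + p_e_given_u * p_u

p_e_given_h = 0.50                    # assumed: how well H predicts E
rivals = [(0.05, 0.5), (0.02, 0.2)]   # assumed (P(E|H_i), P(H_i|not-H)) pairs
honest = complement_likelihood(rivals, p_e_given_u=0.30, p_u=0.3)
rigged = complement_likelihood(rivals, p_e_given_u=0.0, p_u=0.0)  # U silently dropped

print(p_e_given_h / honest)  # Bayes factor with U included: ~4.2
print(p_e_given_h / rigged)  # ~17.2 once U is excluded: manufactured support
```

Notice that in the rigged call the remaining weights under \neg H no longer sum to one; that missing mass is the rigging.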

Inflation hack 2: treat generic matches as specific predictions. If H “predicts” broad states that many rivals also fit, then P(E \mid H) is not special. Penalize vagueness and reward specificity.

Inflation hack 3: ignore dependence. Re-using the same type of testimony as if it were independent multiplies noise. Correct odds need honest dependence accounting.

— Mice vs. Mickey

Droppings in the cupboard are evidence for ordinary mice because that’s the kind of trace real mice reliably leave. Jumping from droppings to Mickey Mouse swaps a plain, well-established category (common mice) for a storybook individual (a trademarked cartoon) that was never shown to be a real-world option. The evidential link is generic; the conclusion you’re drawing is hyper-specific. Until you first establish that “Mickey Mouse” is even a live, real-world candidate, the trace cannot count for him.

✓ What the evidence actually supports: small rodents exist here.
✓ What it does not yet support: a specific, highly embellished character.
✓ How to reason cleanly: upgrade only the minimal hypothesis the evidence directly predicts; add decorations (gloves, red shorts, personality, divinity-like attributes) only when new evidence earns them.

— Perpetual motion pitch

A sketch of a machine that “runs forever” is cheap; a physically possible device is expensive. A drawing does not clear the minimum gates of coherence (no contradictions with conservation laws), interface (clear account of how forces and energy flows work), and evaluability (measurements that would show failure). Treating the sketch as a serious rival before it passes those gates is epistemic inflation.

✓ What’s admissible: a mechanism with stated parts, known interactions, and measurable predictions that could be wrong.
✓ What’s not: “Imagine this clever wheel” with no workable path through known constraints.
✓ Clean rule: a diagram earns attention; only a coherent, testable model earns prior probability.

— The supernatural USB port

Saying “my app talks to another realm through an invisible, untestable interface” names a wish, not a hypothesis. Interfaces, by definition, specify how signals move from one domain to another and what you’d observe on each side when it works or fails. Without even a thin mechanism—inputs, outputs, timings, failure modes, and a protocol that others can probe—you have nothing to evaluate.

✓ Minimal bar for admissibility: specify what the interface takes in, what it emits, under which conditions, and what observation would count as a miss.
✓ Why that matters: if any outcome can be reinterpreted as “it still worked, just mysteriously,” the claim cannot lose and therefore never wins honestly.
✓ Practical takeaway: before you treat a “supernatural connection” as a live competitor, demand a concrete, testable handshake between realms—otherwise keep it out of the hypothesis pool.

Run the norm on five familiar claims:

  1. Disembodied mind
    Every observation ties consciousness to a physical or functional base. Until a coherent account of a substrate-free mind exists, “disembodied mind” isn’t admissible. Do not hand it P(H) > 0 just because the phrase parses. Passing a grammar check is not passing the coherence test.
  2. Hidden yet perfectly loving deity
    If perfect love plausibly entails robust availability for relationship, then pervasive hiddenness sits in tension with that property. Unless the compatibility is actually demonstrated, treating it as a settled “possibility” is inflation. You don’t get to waive the conflict away with a label and still claim evidential parity.
  3. Timeless causation and resurrection mechanics
    “Outside time” causes “in time” events, and dead tissue is restored with full identity. Before you show how a timeless entity instantiates ordered causal relations and how identity is preserved across total biological failure, you have promissory notes, not hypotheses.
  4. Fine-tuning leap
    Cosmic regularities may support a bare creator in the minimal deistic sense, but jumping to a fully specified, doctrine-laden deity smuggles extra content. That’s the Mickey move again: from droppings to a cartoon without evidential payment.
  5. Miracle testimony
    Testimony is frequently dependent, error-prone, and incentive-loaded. Treating it as cleanly independent data multiplies likelihoods illegitimately. If you won’t model dependence, you’re minting confirmation out of noise.
Before you defend any P(H), run this checklist:

  1. Did you define H with clear content and no mixed-category mashups?
  2. Do you have a minimal interface sketch for how H touches the domain it’s supposed to affect?
  3. Can you say what would lower P(H) if observed?
  4. Did you reserve r = P(U) > 0 so you didn’t rig the menu?
  5. Are you using \mathrm{BF} = \frac{P(E \mid H)}{P(E \mid \neg H)} honestly, without shrinking P(E \mid \neg H) by imagination limits?
  6. Are you penalizing vagueness by comparing P(E \mid H) to rivals rather than celebrating any loose concordance?

When in doubt, write the comparison explicitly. If you are comparing H_1 to H_2 with an unknowns remainder U, keep this on one line and read it carefully:

\frac{P(H_1 \mid E)}{P(H_2 \mid E)} = \frac{P(H_1)}{P(H_2)} \times \frac{P(E \mid H_1)}{P(E \mid H_2)}.

And don’t forget the background complement while assessing the overall plausibility of H_1:

P(E \mid \neg H_1) = \alpha P(E \mid H_2) + \beta P(E \mid U), where \alpha + \beta = 1 given your partition under \neg H_1.

Meaning: The chance of the evidence when H_1 is false is a weighted average of two live alternatives under \neg H_1: a named rival H_2 and an unknowns bucket U.

Weights: \alpha=P(H_2 \mid \neg H_1) and \beta=P(U \mid \neg H_1). They sum to one because H_2 and U are your whole partition of the “not H_1” world.

Why it matters: If you drop U by setting \beta=0, you make P(E \mid \neg H_1) too small and accidentally inflate support for H_1. Keeping U prevents rigging the denominator.

Example: If \alpha=0.7, P(E \mid H_2)=0.20, \beta=0.3, and P(E \mid U)=0.05, then P(E \mid \neg H_1)=0.7\times 0.20+0.3\times 0.05=0.155.

Takeaway: Always model \neg H_1 as a mixture of concrete rivals plus U; your likelihood for the complement should reflect both.
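
For concreteness, here is the worked example in Python, extended one step to the pairwise ratio from the odds equation. The two priors and P(E \mid H_1) are hypothetical values added for illustration:

```python
alpha, beta = 0.7, 0.3          # P(H2 | not-H1) and P(U | not-H1)
p_e_h2, p_e_u = 0.20, 0.05
p_e_not_h1 = alpha * p_e_h2 + beta * p_e_u
print(p_e_not_h1)               # ~0.155, matching the example above

# Pairwise comparison from the odds equation; priors and P(E|H1) are assumed.
p_h1, p_h2, p_e_h1 = 0.50, 0.35, 0.30
print((p_h1 / p_h2) * (p_e_h1 / p_e_h2))  # ~2.14: E favors H1 over H2, modestly
```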

If your evaluation of P(E \mid U) is basically “I refuse to think about it,” you’re not doing inference; you’re curating outcomes.

➘ If redefining a term makes any disconfirming outcome still count, the claim is not evaluable.
When results go the wrong way and the response is to change what the key word means, you’ve moved the goalposts. A testable claim must lock its terms before the test and keep them fixed after. If “healed” meant lower fever within 24 hours, you can’t switch it to “felt spiritually better” once the data arrive. If “prophecy” meant X happening by date Y, it cannot become “a symbolic fulfillment” on date Z. Do this and you’ve guaranteed that nothing can count against the claim, which means nothing can honestly count for it either.

✓ Freeze definitions up front.
✓ Write one outcome that would clearly count against the claim and one that would count for it.
✓ If both outcomes are later reinterpreted as support, the claim is unevaluable by design.

➘ If adding detail never decreases P(H), you’re not risking prediction; you’re rehearsing a slogan.
Real hypotheses trade coverage for commitment. More detail should narrow the outcome space, which normally lowers the prior P(H) while potentially raising P(E \mid H) if the detail hits. If you keep stacking details but insist the prior never drops, you’re getting a free lunch: specificity without the cost. That is story-building, not inference.

✓ Each added detail should create at least one plausible observation that would count against H.
✓ If every layer is marketed as “compatible with anything,” it is decoration, not prediction.
✓ Watch the tradeoff: tighter P(E \mid H) should be paired with a lower P(H); otherwise you’re gaming the update.

➘ If your story keeps borrowing generic features of the world that all rivals also predict, it isn’t gaining traction.
Citing broad facts like “there is order,” “math works,” or “people have experiences” does not move the needle if rival hypotheses also expect them. In Bayesian terms, when P(E \mid H_1) \approx P(E \mid H_2), the Bayes factor is about 1, so the evidence is neutral. Treating shared-background facts as unique confirmations is padding, not progress; the sketch after the checkmarks below puts numbers on the difference.

✓ Name real comparators and ask what they actually predict about this specific evidence.
✓ Favor contrastive, risky predictions that your rivals would rate much lower, so \mathrm{BF} = \frac{P(E \mid H)}{P(E \mid \text{rival})} is meaningfully above 1.
✓ If the “evidence” would have looked the same under your main rivals, it isn’t support; it’s ambience.
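
A quick numeric check of that point, using made-up likelihoods: shared-background evidence leaves the odds where they started, while a risky prediction actually moves them:

```python
def posterior_odds(prior_odds, p_e_h1, p_e_h2):
    """Posterior odds of H1 against H2 after observing E."""
    return prior_odds * (p_e_h1 / p_e_h2)

# "There is order": both rivals expect it, so the Bayes factor is ~1.
print(posterior_odds(1.0, p_e_h1=0.90, p_e_h2=0.88))  # ~1.02: ambience, not support
# A contrastive, risky prediction the rival rates much lower.
print(posterior_odds(1.0, p_e_h1=0.60, p_e_h2=0.05))  # 12.0: real traction
```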

  1. Take a popular claim you hear on campus and force it through the admissibility norm. Write exactly what would make you drop P(H).
  2. Pick a favored explanation and compute a toy Bayes factor against a decent rival, even if you must approximate P(E \mid \cdot). Then add a U term and see how the odds move.
  3. Rewrite a vague prediction into a sharp one. If the likelihood didn’t change, you probably kept it vague.
  4. Ask three friends to independently restate the evidence E. If the statements differ wildly, your likelihoods are fragile.

Doesn’t logical possibility guarantee a non-zero prior?
No. Logical possibility is cheap. Admissibility demands coherence, an interface, and evaluability criteria. Until those are shown, P(H) is undefined, not just tiny.

Isn’t U a cop-out?
It’s the opposite. U keeps you honest about ignorance and prevents premature foreclosure on live natural explanations. Refusing U is the real cop-out.

But my hypothesis is simple. Doesn’t simplicity win?
Simplicity helps only among admissible rivals with comparable fit to E. A simple incoherence is still incoherent.

What if H is beyond human comprehension?
Then it’s beyond evaluation and shouldn’t be in the pool. You can believe it as poetry if you like, but don’t claim evidential parity.

When you assess P(E \mid H), check three things on the spot:

✓ Specificity
Ask whether H actually rules things out. A specific claim sharply narrows which outcomes would count as success; a vague claim lets almost anything pass. “There will be an earthquake somewhere this year” is non-specific because nearly any outcome fits. “A quake of magnitude at least 6 within 50 km of Tokyo in October” is specific because most outcomes would falsify it. In Bayesian terms, specificity shows up as a larger gap between P(E \mid H) and the background rates, not as a post-hoc story that can hug any data. Quick test: list three concrete outcomes that would make H fail; if you cannot, H lacks specificity.

✓ Directionality
Good hypotheses stick their necks out: some plausible observations would push their probability down. That is directionality. If every outcome can be spun as support for H—success is a hit, failure is “hidden reasons,” null results are “subtle effects”—then H is self-sealing and unevaluable. A directional medical claim might say “patients who receive treatment T recover at least 10 percent faster than matched controls.” Now worse-than-controls outcomes would clearly lower P(H \mid E). A nondirectional claim like “treatment T works in mysterious ways” never loses; therefore it never honestly wins. Quick test: write down the outcome pattern that would count against H in advance; if none exists, you have a self-sealing claim.

✓ Comparators
Evidence favors H only relative to rivals. Name them. For a healing report, live competitors might include placebo response, natural remission, measurement error, fraud, misattribution, and the unknowns bucket U. The denominator of your Bayes factor needs those alternatives: P(E \mid \neg H) = \sum_i P(E \mid H_i) P(H_i \mid \neg H) + P(E \mid U) P(U \mid \neg H). If you ignore realistic H_i or drop U, you artificially shrink P(E \mid \neg H) and make H look stronger than it is. Quick test: list the top three concrete rivals plus U, give each a nonzero weight, and check whether the evidence clearly prefers H over those—not just over a strawman.

Testimony is evidence, but it’s often dependent, filtered, and incentivized. Don’t pretend n statements are n independent draws if they share sources, communities, or scripts. If you can’t model dependence precisely, at least bound it. That keeps your posterior from exploding just because you multiplied what shouldn’t be multiplied.
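
One crude way to bound dependence is to cap the number of effectively independent reports. The sketch below assumes that rule plus invented numbers; the cap itself is a judgment call you must defend, but it beats multiplying ten copies of the same script:

```python
def combined_likelihood_ratio(lr_per_report, n, n_effective):
    """Combine n similar testimonies with a dependence bound: instead of
    lr ** n (full independence), raise lr only to the effective count."""
    return lr_per_report ** min(n, n_effective)

lr, n = 2.0, 10   # assumed: each report has LR 2; ten reports share one community
print(combined_likelihood_ratio(lr, n, n_effective=n))  # 1024.0 if naively independent
print(combined_likelihood_ratio(lr, n, n_effective=3))  # 8.0 with ~3 independent sources
```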

Draft two columns: admissible and inadmissible. Place candidate hypotheses in one column only after they meet coherence, interface, and evaluability. Then:

\text{Odds}(H \mid E) = \text{Odds}(H) \times \frac{P(E \mid H)}{P(E \mid \neg H)}.

Compute both terms honestly, with U included in \neg H. If your conclusion flips when you reintroduce U, you just caught prior inflation.
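
A minimal flip check with assumed numbers: compute the Bayes factor with U silently dropped, then again with U restored, and watch the verdict change:

```python
p_e_h = 0.30              # assumed likelihood of E under H
p_e_h2, w_h2 = 0.10, 0.6  # assumed P(E|H2) and P(H2|not-H) for the named rival
p_e_u, w_u = 0.50, 0.4    # unknowns that would explain E rather well

print(p_e_h / (p_e_h2 * w_h2))                # ~5.0: looks like solid support for H
print(p_e_h / (p_e_h2 * w_h2 + p_e_u * w_u))  # ~1.15: nearly neutral with U restored
```

If the second number sits near 1 while the first looked decisive, the support was manufactured by the menu, not by the evidence.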

Admitting undefined, incoherent, or unevaluable claims into your hypothesis space is how you mint counterfeit probability. Enforce coherence, interface, and evaluability first. Keep an explicit unknowns reserve r = P(U) > 0. Compare rivals with real Bayes factors that don’t starve the denominator. Refuse to let clever phrases pose as live options. That’s how you stop epistemic inflation and keep your credences proportionate to the actual evidence.

