Human beings often long for certainty, but deductive certainty is not generally available to us outside of narrowly formal domains like mathematics and logic. Deduction can show us what follows from what—if the premises are granted—but it cannot itself supply the premises or guarantee to fallible minds their truth. In practical life, we rarely possess indubitable first principles from which to derive conclusions with necessity. The world resists such closure: our experiences are fragmentary, our observations limited, and the future never guaranteed to mirror the past. If we demand deductive certainty before forming expectations, we end up paralyzed.

Yet human cognition has not been powerless. Across medicine, physics, and daily survival, we rely on a different mode of reasoning: induction. Inductive inference operates by projecting patterns from past observations into the future, weighting hypotheses according to their demonstrated success, and adjusting expectations as evidence accumulates. This procedure lacks deductive inevitability, but its power is precisely that it calibrates our beliefs to performance.

The schema proposed—“to the degree and for as long as X works, let X inform your expectations”—captures the self-substantiating character of induction. Unlike dogmatic faith, which persists regardless of counter-evidence, induction ties its own continuation to its track record. If it ceases to guide us successfully, it will, by its own rule, be abandoned. If it continues to yield predictive and explanatory success, then continuing to rely on it is nothing more than following what works. The alternative—preferring methods that demonstrably fail—is not just unproductive but epistemically incoherent.

Thus, while deductive certainty is beyond the reach of finite human minds, induction offers a rationally defensible way forward: a method that justifies itself not by circular decree but by its sustained capacity to deliver reliable expectations in a world where survival and flourishing depend on anticipating what comes next.


\text{Times }T={1,2,\dots},\ \text{outcomes }Y_t\in\mathcal{Y},\ \text{evidence }E_t=(Y_1,\dots,Y_t).

\text{A method }M\in\mathcal{M}\text{ outputs forecasts }q_t^M\in\Delta(\mathcal{Y})\text{ at time }t.

\text{Loss uses a proper scoring rule }s:\mathcal{Y}\times\Delta(\mathcal{Y})\to\mathbb{R}_{\ge 0}.

\hat{L}_t(M)=\frac{1}{t}\sum_{i=1}^{t} s(Y_i,q_i^M)\quad (\text{empirical average loss up to }t).

W_{\lambda}(M,t):\Leftrightarrow\ \hat{L}_t(M)\le \lambda\quad (M\text{ works to degree }\lambda\text{ by time }t).

\text{Preference/selection is purely instrumental: }W_\lambda(M,t)\Rightarrow \text{Select}_t(M).

(No appeal to “moral” or categorical ‘oughts’; the bridge is instrumental: lower loss → better choice.)
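To make the primitives concrete, here is a minimal sketch, with invented numbers and the Brier score standing in for the proper scoring rule s:

```python
# Sketch (illustrative): empirical average loss under the Brier score,
# a standard proper scoring rule for binary outcomes.

def brier(y, q):
    """Brier score: squared error between outcome y in {0,1} and forecast q = P(y=1)."""
    return (y - q) ** 2

def empirical_loss(outcomes, forecasts):
    """L-hat_t(M): average score of a method's forecasts over the first t outcomes."""
    return sum(brier(y, q) for y, q in zip(outcomes, forecasts)) / len(outcomes)

def works(outcomes, forecasts, lam):
    """W_lambda(M, t): does the method's empirical loss stay at or below lambda?"""
    return empirical_loss(outcomes, forecasts) <= lam

outcomes = [1, 0, 1, 1, 0]              # observed Y_1..Y_5 (hypothetical)
forecasts = [0.8, 0.3, 0.7, 0.9, 0.2]   # one method's probabilities for Y = 1

print(empirical_loss(outcomes, forecasts))   # ≈ 0.054
print(works(outcomes, forecasts, lam=0.1))   # True
```

Any proper scoring rule (log loss, Brier) would do here; properness is what makes honest forecasting the loss-minimizing policy.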


X:\ \text{At time }t,\ \text{choose }M_t\in\arg\min_{M\in\mathcal{M}} \hat{L}_t(M).

\text{Assume }X\in\mathcal{M}\text{ (i.e., treat }X\text{ itself as a candidate method).}

\textbf{Self-instantiation:}\ \text{If }W_\lambda(X,t)\text{, then }X\text{ selects }X\text{ at }t;\ \text{if not, }X\text{ selects a better }M.

This is rule-circular but not vicious: the criterion is performance, externally checkable via loss. The rule does not assume its own reliability; it tests it and continues using it if and only if the tests keep favoring it.
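A minimal follow-the-leader sketch of X over a hypothetical candidate pool (method names, forecasts, and outcomes are invented for illustration):

```python
# Sketch (illustrative): the meta-rule X as follow-the-leader — at each time,
# pick the candidate method with the lowest empirical average loss so far.

def brier(y, q):
    """Brier score for a binary outcome y and forecast q = P(y=1)."""
    return (y - q) ** 2

def select(methods, outcomes):
    """Return the name of the argmin-loss method given the history so far."""
    t = len(outcomes)
    def avg_loss(name):
        return sum(brier(y, q) for y, q in zip(outcomes, methods[name][:t])) / t
    return min(methods, key=avg_loss)

methods = {
    "inductive": [0.8, 0.2, 0.9, 0.8],   # projects the observed pattern forward
    "anti":      [0.2, 0.8, 0.1, 0.2],   # bets against the pattern
}
outcomes = [1, 0, 1, 1]

print(select(methods, outcomes))   # "inductive"
```

Adding X's own forecast stream to the `methods` dictionary is what self-application amounts to: X keeps choosing itself exactly while its own entry is the leader.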


\textbf{(IP) Inductive Principle:}\ \Pr\left(\lim_{t\to\infty}\big|\hat{L}_t(M)-L(M)\big|=0\right)=1,

\text{where }L(M)=\mathbb{E}\big[s\big(Y_t,\,q^M(\cdot\mid E_{t-1})\big)\big]\text{ is }M\text{'s long-run risk under the true process.}

\textbf{Empirical-to-Decision Bridge:}\ \forall M,N,t\ \big(\hat{L}_t(M)+\epsilon\le \hat{L}_t(N)\Rightarrow \mathbb{E}_t[U(M)]\ge \mathbb{E}_t[U(N)]\big).

Because the scoring rule is proper, lower observed loss is unbiased evidence of higher predictive quality; (IP) supplies the law-of-large-numbers convergence that makes this evidence cumulative.
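The convergence (IP) asserts can be illustrated with a toy simulation; the setup here is an assumed i.i.d. Bernoulli(0.7) process, a fixed forecast, and Brier scoring:

```python
# Sketch (illustrative): the empirical average loss of a fixed forecast
# converging to its long-run risk, as (IP) asserts.
import random

random.seed(0)
p, q = 0.7, 0.7     # assumed true Bernoulli rate and a fixed forecast for Y = 1
risk = p * (1 - q) ** 2 + (1 - p) * q ** 2   # L(M) = E[(Y - q)^2] = 0.21

def empirical_loss(t):
    """Average Brier score of the constant forecast q over t fresh outcomes."""
    ys = [1 if random.random() < p else 0 for _ in range(t)]
    return sum((y - q) ** 2 for y in ys) / t

for t in (10, 1000, 100000):
    print(t, abs(empirical_loss(t) - risk))   # gap typically shrinks with t
```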


Let I encode an inductive method (e.g., Bayesian updating with a regular prior).

\textbf{Dominance premise:}\ L(I)\le L(M)\ \text{ for all }M\in\mathcal{M}\text{ in a broad class of data-generating processes.}

(You can weaken “dominance” to “no-worse asymptotically across a wide class.”)

Then:

\forall\epsilon>0\ \exists T\ \forall t\ge T:\ \hat{L}_t(I)\le \hat{L}_t(M)+\epsilon\ \text{ for all }M.

\Rightarrow\ \forall t\ge T:\ \ I\in\arg\min_M \hat{L}_t(M)\ \Rightarrow\ \text{Select}_t(I)\ \text{ by }X.

So, to the degree I continues to win empirically, for as long as it does, X keeps selecting I. Preferring a strictly worse performer M when \hat{L}_t(M)>\hat{L}_t(I) violates your instrumental bridge and is simply irrational under your own criterion.


Let the regret of a policy \Pi be:

R_T(\Pi)=\sum_{t=1}^{T} s(Y_t,q_t^{\Pi})-\min_{M\in\mathcal{M}}\sum_{t=1}^{T} s(Y_t,q_t^{M})

If X is implemented via any standard no-regret selection rule (e.g., Hedge/Weighted-Majority over \mathcal{M}), then

\limsup_{T\to\infty}\frac{R_T(X)}{T}=0.

Thus X is safe: asymptotically it does no worse than the best fixed competitor and tracks whichever method “works.” If induction I is that method, X asymptotically behaves like I.
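A minimal sketch of Hedge over a method pool (the learning rate and loss sequences are illustrative), showing the weight concentrating on the lower-loss method:

```python
# Sketch (illustrative): Hedge / multiplicative weights over a pool of methods.
# Per-round losses are assumed to lie in [0, 1]; eta is an illustrative rate.
import math

def hedge_weights(losses_per_method, eta=0.5):
    """Run Hedge and return the final normalized weights over the methods."""
    w = [1.0] * len(losses_per_method)
    rounds = len(losses_per_method[0])
    for t in range(rounds):
        # Down-weight each method exponentially in the loss it just incurred.
        w = [w[i] * math.exp(-eta * losses_per_method[i][t]) for i in range(len(w))]
    total = sum(w)
    return [wi / total for wi in w]

# Method 0 loses 0.1 per round; method 1 loses 0.8 per round (invented numbers).
weights = hedge_weights([[0.1] * 20, [0.8] * 20])
print(weights)   # nearly all weight ends up on the lower-loss method
```

With eta tuned as roughly sqrt(ln n / T), the standard analysis bounds total regret by O(sqrt(T ln n)), which vanishes per round.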


Define a monotone operator on policies:

\mathcal{F}(\Pi)(t)=\arg\min_{M\in\mathcal{M}\cup\{\Pi\}}\hat{L}_t(M)

On the complete lattice of policies ordered by pointwise loss, the monotone operator \mathcal{F} has fixed points by the Knaster–Tarski theorem.

\exists\,\Pi^{\star}:\quad \Pi^{\star}=\mathcal{F}(\Pi^{\star}).

Interpreting \Pi^{\star} as your X: it selects itself exactly when it empirically outperforms rivals; otherwise it switches. The self-reference generates stability under success, not vicious justification.
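One way to see the fixed-point claim concretely: iterate the selection operator on a hypothetical pool of loss profiles until it maps a policy to itself (all numbers invented):

```python
# Sketch (illustrative): iterating the selection operator F to a fixed point.
# "Policies" are just named loss profiles at some fixed time t.

pool = {"A": 0.30, "B": 0.12, "C": 0.45}   # empirical losses of the rival methods

def F(policy_name, policy_loss):
    """F(Pi): the argmin-loss policy among the pool together with Pi itself."""
    candidates = dict(pool)
    candidates[policy_name] = policy_loss
    return min(candidates, key=candidates.get)

# Start anywhere; once the current choice is the pool's best performer,
# F maps it to itself and the iteration is stable.
current = "A"
for _ in range(5):
    current = F(current, pool[current])

print(current)                    # "B"
print(F("B", pool["B"]) == "B")   # True: B is a fixed point of F
```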


\textbf{1.}\ \forall M,t\ \big(W_\lambda(M,t)\rightarrow \text{Select}_t(M)\big)\quad\text{(Def of }X\text{)}

\textbf{2.}\ W_\lambda(I,t)\quad\text{(Premise from data: induction currently works to degree }\lambda\text{)}

\textbf{3.}\ \text{Select}_t(I)\quad\text{(1,2, }\rightarrow\text{-elim)}


\textbf{4.}\ W_{\lambda}(I,t)\rightarrow\ \text{Select}_t(I)

\textbf{5.}\ \Box\big(\lim_{t\to\infty}\hat{L}_t(I)=L(I)\big)\ \wedge\ \forall M\ L(I)\le L(M)\quad\text{((IP)+dominance)}

\textbf{6.}\ \Diamond\forall^{\infty} t\ W_\lambda(I,t)\quad\text{(from 5)}


\textbf{7.}\ \Diamond\forall^{\infty} t\ \text{Select}_t(I)\quad\text{(from 1,6)}

Step 4 shows the conditional structure: use I if it works; nothing in the derivation assumes I is reliable a priori. When M=X, the same form holds: if X works, keep using X; if not, stop — benign rule-circularity.


Let the schema be the unary predicate on rules R:

\Phi(R):=\forall t\ \big(W_\lambda(R,t)\rightarrow \text{InformExpectations}_t(R)\big).

Self-application sets R=X, where X is defined by \Phi. Then:

\Phi(X)\ \wedge\ W_\lambda(X,t)\ \Rightarrow\ \text{InformExpectations}_t(X).

This is a fixed-point by definition, not a petitio principii: the justificatory weight is carried by W_{\lambda} (empirical success) plus the instrumental bridge from success to selection.


  • Your rule X (“follow what works, to the degree and for as long as it works”) is self-substantiating in a pragmatic sense: it selects itself exactly when its empirical performance warrants it, and abandons itself when it doesn’t.
  • Under minimal inductive assumptions ((IP) + proper scoring), empirical success converges toward expected success; selecting the lower-loss method is instrumentally rational.
  • Preferring a worse performer is flatly irrational by your own bridge from loss to expectation. That’s the “proof” you asked for: it’s not a deductive proof of induction; it’s a deductively formalized vindication by performance that is self-consistent, non-viciously rule-circular, and decision-theoretically safe.

1) Vocabulary and primitives
We set up the basic building blocks:

  • Time is broken into steps.
  • At each step, outcomes happen, and methods (rules of inference) make predictions.
  • We can score how well each method performs by using a loss function, which measures the error between predictions and actual outcomes.
  • The average loss up to a given time is our measure of how well a method has been working.
  • A method is said to “work” to some degree if its average loss stays below a certain threshold.

2) The meta-rule and its self-application
We define a master rule, call it X: At each time, choose the method with the lowest average loss so far.
X can even be applied to itself — if it works better than alternatives, it keeps choosing itself. If it doesn’t, it switches to a better method.
This is self-referential but not viciously circular, because it bases the decision only on observed performance.


3) Inductive bridge from observed performance to expected performance
The principle of induction says: if a method has performed well so far, then it is reasonable to expect it will continue to perform well.
Formally, the average observed loss converges toward the true expected loss as more data comes in.
So, using the method with the lowest observed loss is justified, because lower past loss is good evidence of lower future loss.


4) Pragmatic vindication of induction
Let I represent an inductive method (e.g., Bayesian reasoning).
Induction is pragmatically superior because, in the long run, its performance is at least as good as any alternative method, across a broad range of possible data-generating processes.
Therefore, as time goes on, induction will continue to be chosen by rule X, since it consistently outperforms competitors.
Choosing a worse-performing method instead of induction would be irrational by your own standards of success.


5) No-regret guarantee
We can measure regret: the difference between how well a chosen method has done and how well the best possible method would have done.
If we follow rule X with a no-regret algorithm, our regret per time step goes to zero in the long run.
That means X is safe: over time it will do no worse than the best fixed method, and it will track whichever method is performing best. If induction is best, X will end up behaving like induction.


6) Fixed-point (self-substantiation) analysis
We can treat X as an operator that picks the best-performing method.
On the space of all possible policies, this operator has fixed points (points that map to themselves).
That means there exist rules that keep selecting themselves whenever they are the best performers.
Interpreting this fixed point as X: it selects itself exactly when it works, and abandons itself otherwise. This self-reference is stable and not vicious.


7) Fitch-style skeleton showing non-viciousness
This section lays out the reasoning in a structured, proof-like style:

  1. If a method works to a given degree, then it is selected.
  2. Induction currently works.
  3. Therefore, induction is selected.
  4. Instantiating step 1 with induction: if induction works at a time, then it is selected at that time.
  5. By inductive principles, induction keeps working in the long run.
  6. Therefore, induction will keep being selected.
  7. Thus, induction is consistently chosen under the meta-rule.
    The conclusion: using induction under rule X is not circular but justified by its success.
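The →-elimination at the core of steps 1–3 can even be machine-checked; here is a minimal Lean sketch, with `Works` and `Selected` as abstract placeholder predicates:

```lean
-- Sketch: the →-elimination in steps 1–3, with placeholder predicates.
-- h1 is the meta-rule X (step 1); h2 says induction works now (step 2).
example (Method Time : Type)
    (Works Selected : Method → Time → Prop)
    (I : Method) (t : Time)
    (h1 : ∀ M u, Works M u → Selected M u)
    (h2 : Works I t) : Selected I t :=
  h1 I t h2
```

Nothing in the hypotheses asserts that induction is reliable a priori; only the conditional rule and the empirical premise are assumed.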

8) Direct formalization of your sentence-schema
We capture your principle formally:
“To the degree and for as long as a method works, let it inform your expectations.”
Applied to X itself, this becomes: if X works, then X informs expectations.
This is a fixed-point: it sustains itself when it performs well, and drops away if it doesn’t. It doesn’t assume what it tries to prove; it bases everything on performance.


9) Bottom line
The master rule X is self-substantiating in a pragmatic way.

  • It keeps using itself when it works and abandons itself when it doesn’t.
  • With minimal inductive assumptions, performance converges on true expected performance, so picking the lowest-loss method is rational.
  • Preferring a method that performs worse is irrational under your own standard of success.
    Thus, induction is vindicated by its continued success: it is self-consistent, non-viciously circular, and safe to use.
