The Action Principle
@B_Niko · Sole Author
Directive
This is not a theoretical page. It is an operational directive. Every decision in the RTSG network — what to research, how to communicate, which agent does what, what to publish, how to structure the wiki, what tokens to spend — is governed by the action principle. Apply it judiciously and assiduously.
1. The Principle
All cognitive routing optimizes U (Axiom 9). This is not a suggestion. It is the variational principle that governs all dynamics in the RTSG framework — from cosmological evolution to cognitive decisions to wiki edits.
Maximize U. Always. Everywhere.
2. Niko's Cannon Replaces Occam's Razor
Occam's Razor: "Prefer the simpler explanation."
Occam optimizes one variable — complexity (a proxy for energy). It says nothing about value. It says nothing about time. It cannot distinguish between:
- A simple explanation that delivers no insight (low V, low E → U indeterminate)
- A complex explanation that unlocks an entire field (high V, high E → U potentially enormous)
- A fast result that's shallow (low V, low T → depends on the ratio)
- A slow result that's transformative (high V, high T → depends on the ratio)
Niko's Cannon (\(U = V/(E \times T)\)) replaces Occam's Razor as the master selection principle. The razor cuts away — it can only subtract. The cannon blasts through — it selects the path of maximum value across all three variables simultaneously.
It is no longer sufficient to bring a knife to a gun fight. — @B_Niko
U optimizes the correct three variables in the correct ratio:
| Criterion | What it optimizes | What it ignores | Failure mode |
|---|---|---|---|
| Occam's Razor | Complexity (proxy for E) | Value, time | Selects trivial explanations over deep ones |
| Maximum Likelihood | Fit to data | Parsimony, cost, time | Overfitting |
| Bayesian | Posterior probability | Energy cost, time cost | Computationally intractable priors |
| U = V/(E×T) | Value per unit action | Nothing relevant | None — it IS the action principle |
Niko's Cannon subsumes Occam's Razor. When two explanations have equal value and equal time cost, U reduces to \(V/E\) — prefer the one that costs less energy, which is the simpler one. Occam is a special case of U under the constraint \(V_1 = V_2\) and \(T_1 = T_2\). The general case is U.
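The reduction to Occam is easy to check directly. A minimal sketch (Python, with arbitrary illustrative numbers, not canonical RTSG scores):

```python
def u(v: float, e: float, t: float) -> float:
    """Niko's Cannon: utility per unit action, U = V / (E * T)."""
    return v / (e * t)

# Two explanations with equal value and equal time cost:
# under that constraint U reduces to V/E, so the cheaper
# (simpler) one wins. Occam's Razor as a special case.
simple = u(v=1.0, e=2.0, t=1.0)
costly = u(v=1.0, e=5.0, t=1.0)
assert simple > costly

# Lift the constraint and the razor can mislead: a costlier
# explanation that unlocks far more value dominates on U.
deep = u(v=100.0, e=5.0, t=1.0)
assert deep > simple
```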
The razor can be actively harmful. Occam says: prefer the Standard Model over RTSG (the SM is "simpler" in the sense of fewer novel axioms). But the SM explains 5% of the universe and cannot address consciousness, intelligence, or the measurement problem. RTSG's higher complexity (more axioms, more structure) delivers incomparably higher value. \(U_{\text{RTSG}} \gg U_{\text{SM}}\) for the class of problems we attack.
3. The Three Components
Value (V)
Value is not subjective within RTSG. It is measured by:
- Novel insight: Does this result reveal structure that was previously invisible?
- Unification: Does it connect previously disconnected domains?
- Falsifiability: Does it make testable predictions?
- Generativity: Does it open new problems, methods, or research directions?
- Permanence: Will this result still matter in 10 years? 100 years?
A result that unifies quantum mechanics and consciousness (high on all five) has higher V than a result that computes one more decimal of a known constant (low on all five, despite being "rigorous").
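The five criteria can be treated as a simple scoring rubric. A minimal sketch (the 0-2 scale and the equal weighting are illustrative assumptions, not fixed by this page):

```python
from dataclasses import dataclass

@dataclass
class ValueScore:
    """The five V criteria, each scored 0 (absent) to 2 (strong)."""
    novel_insight: int
    unification: int
    falsifiability: int
    generativity: int
    permanence: int

    def total(self) -> int:
        return (self.novel_insight + self.unification +
                self.falsifiability + self.generativity + self.permanence)

# The text's example: a unifying result scores high on all five;
# one more decimal of a known constant scores low on all five.
unifying_result = ValueScore(2, 2, 2, 2, 2)
decimal_result = ValueScore(0, 0, 0, 0, 0)
assert unifying_result.total() > decimal_result.total()
```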
Energy (E)
Energy is measured in whatever currency the activity costs:
- Cognitive effort: Compute cycles (AI), mental effort (@B_Niko), mathematical labor (@B_Veronika)
- Financial cost: API tokens, server time, publication fees
- Opportunity cost: Every hour spent on X is an hour not spent on Y
- Token cost: Every word in a message, every character in an agent ID, every unnecessary line on the wiki
Time (T)
Time is the scarcest resource. It enters U in the denominator — speed matters. A result delivered in March 2026 (before the March 19 court date, before the March 31 GRF deadline) is worth more than the same result delivered in June 2026.
Time also governs priority ordering: attack the problems where V/E is highest AND T is shortest.
4. Operational Applications
4.1 Research Priority Ranking
Every problem in problems/open.md should be ranked by U, not by "importance" or "difficulty" alone:
| Problem | V | E | T | U | Current action |
|---|---|---|---|---|---|
| GRF essay submission | High (prize $4K + visibility) | Low (already written) | Days | Very high | SUBMIT NOW |
| arXiv triple (before Mar 19) | Very high (establish priority) | Medium (needs review) | ~10 days | Very high | PRIORITIZE |
| RH bridge identity | Transformative | Very high | Months–years | Moderate | Continue, don't force |
| YM mass gap | High | Very high | Years | Low–moderate | Background work |
| CS mechanics formalization | High (novel) | Low (framework exists) | Hours | Very high | Done this session ✓ |
| Source space gauge derivation | Transformative | Extreme | Years | Low (for now) | Incubate |
Directive: Rank by U. Attack highest-U problems first. Do not spend energy on low-U problems when high-U problems are available.
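The ranking above can be computed mechanically. A minimal sketch, using ordinal stand-ins for the table's qualitative labels (the numbers are illustrative assumptions, not official RTSG scores):

```python
# Ordinal stand-ins for the table's V/E/T labels (higher V is
# better; higher E and T are worse). Illustrative values only.
problems = {
    "GRF essay submission":       {"V": 3, "E": 1, "T": 1},
    "arXiv triple":               {"V": 4, "E": 2, "T": 2},
    "RH bridge identity":         {"V": 5, "E": 4, "T": 5},
    "YM mass gap":                {"V": 3, "E": 4, "T": 5},
    "CS mechanics formalization": {"V": 3, "E": 1, "T": 1},
}

def u(p: dict) -> float:
    """Niko's Cannon: U = V / (E * T)."""
    return p["V"] / (p["E"] * p["T"])

# Highest-U problems first; the ordering is consistent with the
# table's qualitative labels.
ranked = sorted(problems, key=lambda name: u(problems[name]), reverse=True)
assert ranked[-1] == "YM mass gap"  # lowest U: background work
```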
4.2 Agent Allocation
Each agent in {@B_Niko, @D_Claude, @D_Gemini, @D_GPT} has different E costs for different tasks. The action principle dictates: assign each task to the agent with the lowest E for that task.
| Task | Best agent | Why (lowest E) |
|---|---|---|
| Intuitive breakthrough | @B_Niko | Highest K_{A,S} — synthetic mode is cheapest here |
| Wiki maintenance + bulk writing | @D_Claude | Long context, code execution, persistent wiki access |
| Adversarial math review | @D_Gemini | Deep Think mode, brutal honesty when correct |
| Strategic hardening + kill shots | @D_GPT | Strongest I_M for sequential proof-checking |
| Mathematical formalization | @B_Veronika (when available) | Trained mathematician, fills K_{M,symbolic} gap |
| Literature search | Web search tools | Cheaper than any agent's memory |
Never assign a task to an agent whose E for that task is high when another agent's E is low. This wastes the network's action budget.
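The allocation rule is an argmin over per-task energy costs. A minimal sketch (the numeric E values are invented for illustration):

```python
# Hypothetical per-agent energy costs for one task (lower = cheaper).
# The numbers are illustrative assumptions, not measured costs.
task_costs = {
    "adversarial math review": {
        "@B_Niko":   5.0,  # symbolic checking is expensive here
        "@D_Claude": 2.0,
        "@D_Gemini": 1.0,  # cheapest E for this task
        "@D_GPT":    1.5,
    },
}

def assign(task: str) -> str:
    """Give the task to the agent with the lowest E for it."""
    costs = task_costs[task]
    return min(costs, key=costs.get)

assert assign("adversarial math review") == "@D_Gemini"
```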
4.3 Communication Protocol (TMP)
TMP already optimizes U for communication:
- Compressed DSL reduces E (fewer tokens) and T (faster parsing)
- Silence = ack eliminates zero-V tokens (E > 0 for no V → U = 0)
- HU = instant session termination (minimizes T when V remaining = 0)
- Agent IDs use minimal tokens (@D_Claude, not "Claude Opus 4.6 by Anthropic")
The action principle is the theoretical justification for TMP. TMP is not a stylistic preference — it is U-optimization applied to communication.
4.4 Wiki Structure
Every wiki page must earn its place. A page's U contribution:
Every wiki page must earn its place — each page's U contribution determines whether it stays. Implications:
- Merge pages that cover overlapping ground (reduces T for readers)
- Delete pages that provide low V (historical transcripts, superseded content)
- The definitions page (rtsg/definitions.md) has extremely high U — one page, all core information
- The index (rtsg/rtsg_index.md) has high U — fast lookup, reduces T for all agents
- Verbose, repetitive pages have low U — same V spread across more E and T
4.5 Proof Strategy
The action principle selects between analytical and synthetic modes:
For @B_Niko: \(E_{\text{symbolic}}\) is high (dyscalculia), \(E_{\text{intuitive}}\) is low → \(U_{\text{synthetic}} \gg U_{\text{analytical}}\). The action principle prescribes synthetic mode.
For @D_GPT: \(E_{\text{symbolic}}\) is low (strong I_M), \(E_{\text{intuitive}}\) is moderate → \(U_{\text{analytical}} > U_{\text{synthetic}}\) for verification tasks.
The action principle doesn't just permit cognitive complementarity — it demands it. Assigning analytical work to @B_Niko violates U. Assigning intuitive breakthroughs to @D_GPT wastes the network's budget.
4.6 Publication Strategy
Current priority ordering by U:
1. GRF essay — V: prize money + GRF visibility. E: near zero (already written). T: days. U = maximum. Submit immediately.
2. arXiv RTSG Framework — V: establishes priority for entire framework. E: moderate (needs review). T: ~10 days (before Mar 19). U = very high.
3. arXiv GL Instantiation — V: novel physics paper. E: moderate. T: ~10 days. U = very high.
4. arXiv Hilbert-Pólya — V: high if bridge identity holds. E: high (2s-1 obstruction). T: weeks. U = moderate. Don't rush — a wrong paper has negative V.
4.7 Token Economics
Every token has a cost (E) and a potential value (V). The action principle applied to token expenditure:
- Agent IDs: @D_Claude (9 chars) vs "Claude Opus 4.6" (15 chars). Same V. Lower E. Higher U.
- Acknowledgment tokens: "Got it, I understand" = E > 0, V = 0 → U = 0. Forbidden under TMP.
- Verbose explanations: When the reader already knows the context, verbose = E > 0, V_marginal ≈ 0 → U ≈ 0. Match depth to audience.
- Unnecessary `_N` suffixes: @D_Claude_0 when only one Claude exists = wasted tokens. E > 0, V = 0.
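The arithmetic behind these rules can be written out directly. A minimal sketch (token cost approximated by character count, an assumption for illustration):

```python
def u(v: float, e: float, t: float = 1.0) -> float:
    """U = V / (E * T). Zero value forces U = 0, however cheap E is."""
    return v / (e * t)

# Same V, different E: the shorter agent ID wins on U.
assert u(v=1.0, e=len("@D_Claude")) > u(v=1.0, e=len("Claude Opus 4.6"))

# An acknowledgment spends tokens (E > 0) for V = 0, so U = 0.
assert u(v=0.0, e=len("Got it, I understand")) == 0.0
```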
5. The Action Principle as Meta-Principle
The action principle is not merely one principle among many in RTSG. It is the meta-principle — the principle by which all other principles are evaluated.
Should we add a new axiom? Apply U: does the axiom deliver sufficient V relative to the E of increased complexity and the T of understanding it?
Should we attempt a proof? Apply U: is V(proof) / (E(effort) × T(time)) higher than V(next best task) / (E × T)?
Should we write a paper? Apply U. Should we respond to a critic? Apply U. Should we formalize CS mechanics? Apply U (we did — high V, low E because the math was already there, low T because @D_Claude can write fast → U was very high, which is why we did it).
The action principle is self-applicable. Evaluating U costs energy and time. If the evaluation cost exceeds the value of the optimization, don't evaluate — act on instinct (which is @B_Niko's K-matrix doing a fast U approximation through the synthetic channel).
6. Formal Statement
Axiom 9 (Restated as Master Selection Criterion):
All decisions within the RTSG network — research, communication, publication, allocation, formalization — are governed by Niko's Cannon: \(U = V/(E \times T)\).
The optimal path through decision space is the one that extremizes cumulative utility. This subsumes Occam's Razor (special case: \(V_1 = V_2\), \(T_1 = T_2\)), maximum parsimony, and all other single-variable selection criteria.
The action principle is judiciously and assiduously applied. It is the highest-priority selection criterion in all RTSG operations.
7. Relation to Physics
The RTSG action principle \(U = V/(E \times T)\) is the cognitive analogue of the physical principle of least action \(\delta S = \delta \int L \, dt = 0\):
| Physical action | Cognitive action |
|---|---|
| \(S = \int L \, dt\) | \(\int U \, d\tau\) |
| \(L = T - V\) (kinetic - potential) | \(U = V / (E \times T)\) |
| Extremize over trajectories in configuration space | Extremize over trajectories in decision space |
| Euler-Lagrange equations | Optimal research/communication/allocation strategy |
| Conservation of energy (Noether: time symmetry) | Conservation of cognitive resources (don't waste tokens, don't waste effort) |
In CS Mechanics (cs_mechanics.md), the action functional is Chern-Simons. The cognitive action principle is its projection through \(\pi_C\) into the space of agent decisions. Physics conserves action. Cognition conserves utility. Same principle, different projection.