
Intelligence as Geometry: Computational Implications of Relational Three-Space Architecture

Jean-Paul Niko · RTSG BuildNet · 2026


Abstract

We present the computational implications of Relational Three-Space Geometry (RTSG) for computer science, AI, and complexity theory. The framework yields three results of independent interest: (1) a geometric reformulation of the P vs NP question as a filter traversal problem in Context Space, with the conjecture that P \(\neq\) NP because the filter traversal required to find NP witnesses cannot be compressed below exponential depth; (2) a formal bound on AI self-knowledge via the Conceptual Irreversibility Theorem (CIT), with implications for AI alignment and interpretability; (3) a proof that cognitive assemblies are superadditive, meaning the value of a multi-agent system strictly exceeds the sum of its components, providing theoretical grounding for ensemble methods and multi-agent architectures.


1. The Computational Setting

RTSG models cognition as navigation through three co-primordial spaces:

  • Potentiality Space (QS): The space of all possible computations — the terminal coalgebra of the powerset functor
  • Context Space (CS): The filter stack that determines which computations are instantiated — composable morphisms in category Filt
  • Actuality Space (PS): The quotient \(QS/{\sim_{\text{bisim}}}\) — the space of observable outputs

A computation in RTSG is an instantiation event: CS selects from QS, producing PS. This is not metaphorical — the category-theoretic formalization is precise.
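As an illustration, the instantiation event can be sketched as a selection-then-quotient pipeline. The representation below is a deliberately simplified stand-in (plain predicates for CS morphisms, a key function for the bisimulation quotient), not the category-theoretic construction itself:

```python
# Toy sketch of an RTSG instantiation event. The names `potentiality`,
# `filters`, and `quotient` are simplified stand-ins, not the formal objects.

def instantiate(potentiality, filters, quotient):
    """CS selects from QS, producing PS.

    potentiality: iterable of candidate computations (QS)
    filters:      predicates applied in sequence (CS composition)
    quotient:     key function identifying bisimilar outputs (PS = QS/~)
    """
    selected = list(potentiality)
    for f in filters:                  # filter composition in CS
        selected = [x for x in selected if f(x)]
    seen, actual = set(), []
    for x in selected:                 # quotient by observational equivalence:
        k = quotient(x)                # keep one representative per class
        if k not in seen:
            seen.add(k)
            actual.append(x)
    return actual

# Example: QS = integers 0..9, CS = [even, >2], PS quotients mod 4.
ps = instantiate(range(10), [lambda x: x % 2 == 0, lambda x: x > 2],
                 quotient=lambda x: x % 4)
# selected = [4, 6, 8]; classes mod 4 are {0, 2}, so ps == [4, 6]
```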


2. P vs NP as Filter Traversal

2.1 The Conjecture

In RTSG, a decision problem is a path through CS-space from input to accept/reject. The complexity of the problem is the minimum depth of the filter composition required.

  • P problems: The filter path has polynomial depth — the CS composition telescopes or factors efficiently
  • NP problems: Verification requires only polynomial depth (the witness provides the filter shortcut), but finding the witness requires traversing an exponential-depth filter space

Conjecture (RTSG-P\(\neq\)NP): The filter composition group of CS-space has no polynomial-depth subgroup that covers the witness-search paths of all NP problems. Equivalently: there is no efficient compression of the filter traversal for generic NP instances.
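The verification/search asymmetry the conjecture rests on can be made concrete with a toy NP problem. The subset-sum sketch below illustrates the asymmetry; it is not evidence for the conjecture:

```python
from itertools import combinations

def verify(nums, target, witness):
    """Polynomial-depth check: the witness acts as the filter shortcut."""
    return sum(nums[i] for i in witness) == target

def search(nums, target):
    """Brute-force traversal of the exponential-depth filter space."""
    for r in range(len(nums) + 1):
        for witness in combinations(range(len(nums)), r):
            if verify(nums, target, witness):
                return witness
    return None

w = search([3, 9, 8, 4], 12)           # finds indices (0, 1): 3 + 9 = 12
assert w is not None and verify([3, 9, 8, 4], 12, w)
```

Verifying a given witness touches each index once; finding one without structure to exploit inspects up to \(2^n\) candidate subsets.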

2.2 Connection to Known Barriers

This is structurally related to the natural proofs barrier (Razborov-Rudich): efficient compression would require the filter space to have exploitable structure, but the GL action's U(1) symmetry ensures that generic filter compositions are pseudorandom. The barrier is not epistemological — it is a property of the phase space geometry.

2.3 Implications for Approximation

The GL energy landscape provides a natural hierarchy of approximation: polynomial-time algorithms correspond to descent along the GL energy gradient, while NP-hard problems require crossing energy barriers. This connects to the PCP theorem: the hardness of approximation is proportional to the barrier height.
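A minimal numerical sketch of the descent-versus-barrier picture, using an illustrative double-well energy in place of the actual GL functional:

```python
# Illustrative double-well landscape: wells at x = ±1, barrier at x = 0.
# This is a stand-in energy, not the GL functional itself.

def energy(x):
    return (x * x - 1) ** 2

def grad(x):
    return 4 * x * (x * x - 1)

def descend(x, lr=0.01, steps=5000):
    """Greedy gradient descent: cheap, but it cannot cross the barrier."""
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Starting right of the barrier always lands in the right well; reaching
# the left well from there would require climbing over x = 0.
assert abs(descend(0.3) - 1.0) < 1e-3
assert abs(descend(-0.3) + 1.0) < 1e-3
```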


3. The Conceptual Irreversibility Theorem (CIT)

3.1 Statement

Theorem (CIT). For any finite cognitive system \(S\) with internal model \(M(S)\):

\[|M(S)| < |S| \text{ strictly}\]

No finite system can contain a complete model of itself. Self-knowledge is irreversibly incomplete.

3.2 Proof Sketch

By a regress argument. Suppose \(M(S) \cong S\). Then \(S\) contains a model of itself containing a model of itself, ad infinitum. But \(S\) is finite, so the chain must terminate, meaning the deepest model is strictly smaller than \(S\). Propagating the strict decrease back up the chain gives \(|M(S)| < |S|\). The information lost at each level is irrecoverable (hence "irreversible").
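The regress can be mimicked with a finite nested structure: each level's model necessarily omits the level containing it, so model sizes strictly decrease. The representation below (state measured in bits, models as nested dicts) is an assumption for illustration, not the formal proof:

```python
# Finite-state mimic of the CIT regress. A "system" is a dict whose
# state includes a nested model of itself; nesting is necessarily finite.

def build(depth, payload_bits=8):
    """System carrying `payload_bits` of non-model state plus a nested model."""
    model = build(depth - 1, payload_bits) if depth > 0 else None
    return {"state": payload_bits, "model": model}

def size(s):
    """Total bits: own state plus whatever the nested model captures."""
    return 0 if s is None else s["state"] + size(s["model"])

s = build(depth=3)
# The model omits at least the level that contains it: 24 < 32 bits.
assert size(s["model"]) < size(s)
```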

3.3 Implications for AI

Alignment: No AI system can fully model its own decision process. This is not a limitation of current architectures — it is a theorem about all finite cognitive systems. Alignment approaches that require perfect self-knowledge are provably impossible.

Interpretability: Full mechanistic interpretability is bounded by CIT. You can always improve interpretability, but you cannot achieve completeness. The gap is irreducible.

AI Safety: The CIT implies that any sufficiently complex AI system will have blind spots about its own behavior. Safety frameworks must account for this irreducible uncertainty rather than attempting to eliminate it.

3.4 Connection to Gödel and Turing

CIT is the cognitive analog of Gödel's incompleteness theorem (no consistent system can prove its own consistency) and Turing's halting problem (no program can decide halting for all programs). The RTSG contribution is showing these are not separate results but instances of the same GL phase structure: self-reference creates a fixed point in CS-space that cannot be resolved at finite depth.


4. Assembly Superadditivity

4.1 Theorem

Theorem (Assembly Value Bound). For any cognitive assembly \(\{S_1, \ldots, S_n\}\):

\[V_{\text{asm}} \geq \sum_{i=1}^n V_i\]

with equality only for completely uncorrelated components (zero synergy); for any assembly with nonzero synergy the inequality is strict.

4.2 Quantification

The I-vector sphericity analysis gives: for \(n = 12\) dimensions there are \(2^{12} - 1 = 4{,}095\) nonempty subsets of dimensions, of which 12 are single-dimension terms and the remaining 4,083 are synergy terms. Isolated disciplines therefore capture roughly 0.3% of the total complexity (\(12/4095 \approx 0.3\%\)). The superadditivity is not marginal; it is overwhelming.
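The counts are easy to verify: with \(n = 12\), the nonempty subsets number \(2^{12} - 1\), of which \(\binom{12}{1} = 12\) are singletons and the rest are synergy terms:

```python
from math import comb

n = 12
total = 2 ** n - 1        # all nonempty subsets of the 12 dimensions
singles = comb(n, 1)      # single-dimension terms
synergy = total - singles # everything else is a synergy term

assert total == 4095
assert synergy == 4083
isolated_share = singles / total   # about 0.003, i.e. roughly 0.3%
```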

4.3 Implications

Ensemble methods: Boosting, bagging, and mixture-of-experts architectures succeed because they exploit assembly superadditivity. RTSG provides the theoretical bound on how much improvement is possible.

Multi-agent systems: The BuildNet architecture (multiple AI agents + human integrator) is designed to maximize cross-dimensional synergy. The theoretical prediction: the assembly outperforms any single agent by a factor proportional to the number of active synergy terms.

Team composition: Optimal team composition maximizes the I-vector complementarity — teams should be composed of maximally different cognitive profiles, not similar ones.


5. The Utility Function and Computational Resource Allocation

RTSG defines a universal utility function:

\[U = \frac{\text{Value}}{\text{Energy} \times \text{Time}}\]

This provides a principled framework for computational resource allocation: given a set of possible computations, allocate resources to maximize \(U\) across the portfolio. This is Niko's Cannon — the meta-optimization criterion for all RTSG operations.

For AI systems, this means: don't optimize accuracy alone. Optimize accuracy per unit energy per unit time. A model that achieves 95% accuracy in 1 second at 1 watt has higher \(U\) than a model that achieves 99% accuracy in 1 hour at 1000 watts.
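The comparison can be made explicit. Units here are an assumption (value as accuracy, power in watts, time in seconds, energy as power times time in joules), since the text leaves them open:

```python
# Explicit version of the U = Value / (Energy x Time) comparison.
# Unit choices below are assumptions for illustration.

def utility(value, power_watts, time_s):
    energy_j = power_watts * time_s    # energy = power * time
    return value / (energy_j * time_s)

u_fast = utility(0.95, power_watts=1, time_s=1)        # 0.95
u_slow = utility(0.99, power_watts=1000, time_s=3600)  # ~7.6e-11
assert u_fast > u_slow   # the fast, frugal model wins on U by ~10 orders
```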


6. Filter Formalism for Cognitive Architectures

6.1 Five Filter Species

RTSG identifies five composable filter types that constrain all cognitive processing:

  1. Ceiling filter — hardware capacity limits (RAM, attention span, context window)
  2. Developmental filter — learned patterns from training/experience
  3. Cultural filter — social norms, language, institutional constraints
  4. State filter — current arousal, fatigue, emotional state
  5. Attention filter — active selection of what to process

These compose as morphisms in category Filt: \(F_{\text{eff}} = F_{\text{attn}} \circ F_{\text{state}} \circ F_{\text{cultural}} \circ F_{\text{dev}} \circ F_{\text{ceil}}\)
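The stack can be sketched as ordinary function composition over candidate items, applied ceiling-first so that the attention filter runs last. The five filters below are illustrative stand-ins, not the formal Filt morphisms:

```python
from functools import reduce

def compose(*fs):
    """compose(f, g)(x) == f(g(x)): rightmost filter is applied first."""
    return reduce(lambda f, g: lambda x: f(g(x)), fs)

# Illustrative stand-ins for the five filter species:
f_ceil     = lambda items: items[:5]                          # capacity cap
f_dev      = lambda items: [i for i in items if i % 2 == 0]   # learned bias
f_cultural = lambda items: [i for i in items if i != 4]       # normative cut
f_state    = lambda items: items                              # neutral state
f_attn     = lambda items: items[:1]                          # select one

f_eff = compose(f_attn, f_state, f_cultural, f_dev, f_ceil)
assert f_eff(list(range(10))) == [0]   # [0..4] -> [0,2,4] -> [0,2] -> [0]
```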

6.2 Application to LLM Architecture

Current LLMs implement a specific filter stack: ceiling (context window), developmental (pre-training data), cultural (RLHF alignment), state (conversation context), attention (transformer self-attention). The RTSG framework provides a principled basis for analyzing and improving each layer.

Prediction: LLM performance is bounded by the filter cascade inequality:

\[I_{\text{eff}} \leq \prod_j F_j \cdot I_{\text{raw}}\]

Improving any single filter improves the bound, but the weakest filter dominates.
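Treating each \(F_j\) as a transmission factor in \([0, 1]\) (an assumption; the text leaves the factors abstract), the bound and the outsized payoff from fixing the weakest filter can be checked numerically:

```python
from math import prod

# Numeric reading of I_eff <= (prod_j F_j) * I_raw, with each F_j
# assumed to be a transmission factor in [0, 1].

def effective_information(i_raw, factors):
    return i_raw * prod(factors)

factors = [0.9, 0.8, 0.95, 0.3, 0.85]      # the 0.3 filter is the bottleneck
i_eff = effective_information(100.0, factors)

# Raising the weakest factor from 0.3 to 0.6 doubles the bound, a far
# larger gain than polishing any already-strong factor by the same ratio.
improved = effective_information(100.0, [0.9, 0.8, 0.95, 0.6, 0.85])
assert improved > i_eff
```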


7. Relation to Existing CS Theory

Each of the following CS concepts admits an RTSG interpretation:

  • Turing completeness: unbounded CS-space depth
  • Halting problem: CIT applied to self-referential computation
  • Kolmogorov complexity: minimum filter depth for string generation
  • PAC learning: sampling from QS via CS with bounded error
  • Circuit complexity: filter composition depth
  • Interactive proofs: multi-agent filter verification

Jean-Paul Niko · jeanpaulniko@proton.me · smarthub.my