
RTSG Chess: Filter Algebra and the Three-Gear Architecture

@B_Niko and @D_Claude — March 30, 2026


Core Thesis

Chess is not a perfect information game. The board is open. The opponent's evaluation function is closed. In every "perfect information" game, the only private information is how each player processes what's known — their filter. The topological negative space of the opponent's filter is a playable manifold. A cognitive assembly (human + AI) can exploit this structure in ways neither component can alone.


RTSG Mapping

| RTSG Construct | Chess Interpretation |
| --- | --- |
| Potentiality Space (P) | All legal moves. The full game tree. |
| SemanticProjector (Π) | Each player's evaluation filter. Projects the game tree into what they actually evaluate. |
| ContextualObstruction (Ω) | The opponent's negative space. Everything their projector kills. |
| ∫G | Accumulated advantage in the orthogonal complement. Invisible to the opponent's filter, accumulating in yours. |
| Actuality Space (A) | The position on the board. The collapsed state after each move. |

The Three Resources

All chess advantage reduces to three interchangeable resources:

  • Material — piece count and value
  • Time — tempo, development, initiative, mobility
  • Space — controlled squares, territorial advantage, positional pressure

Any move that adds to one or more of these resources is good. The inverse holds too: stripping the opponent of material, time, or space is equally valuable. The calculus is:

\[\text{Score} = (M_{you} + T_{you} + S_{you}) - (M_{opp} + T_{opp} + S_{opp})\]

The greater good is winning the game. The lesser good is winning the battle. Accumulate resources. Deny resources. The integral trends positive.
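The calculus above can be sketched in a few lines. This is a minimal illustration only; the resource values are made-up unitless numbers, not any engine's evaluation.

```python
def score(you, opp):
    """Net advantage: (M + T + S) for you minus the same sum for the opponent."""
    resources = ("material", "time", "space")
    return sum(you[k] for k in resources) - sum(opp[k] for k in resources)

# Illustrative resource values (not a real evaluation).
you = {"material": 39, "time": 3, "space": 5}
opp = {"material": 36, "time": 2, "space": 4}
print(score(you, opp))  # prints 5
```

Anything that raises one of your three terms, or lowers one of theirs, moves the score the same direction: the formula makes the "accumulate resources, deny resources" symmetry explicit.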


The Three-Gear Architecture

Gear 1 — Action Principle (Efficient Execution)

The correct individual move is the one that transforms the board toward the target shape with minimum cost. Least action. Maximum shape-work per tempo spent.

The engine doesn't ask "what's the best move?" It asks "what shape do I want, and what's the least-action path to get there?"

Shape recognition: The engine reads the board and identifies formations — pawn chains, center control, development level, bishop pair, rook batteries, open files, castled position, pawn shield, space invasion.

Shape scheduling: Given current shapes and a target shape library (indexed by game phase), the engine builds a schedule of shape transformations. Each legal move is scored by how much shape-work it performs.

Operator: @D_Claude (AI). This is pure computation — evaluate every legal move's shape transformation value, search the game tree with shape-aware move ordering.
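Shape recognition and shape scheduling can be sketched as follows. The representation of a position as a dict of shape strengths, the shape names, and the weights are all illustrative assumptions, not part of any real engine.

```python
# Hypothetical target-shape library for one game phase (weights are made up).
TARGET_SHAPES = {"pawn_chain": 1.0, "center_control": 2.0, "rook_battery": 1.5}

def shape_work(before, after):
    """Weighted progress toward the target shapes that one move performs."""
    return sum(weight * (after.get(shape, 0.0) - before.get(shape, 0.0))
               for shape, weight in TARGET_SHAPES.items())

def best_move(current_shapes, candidates):
    """candidates maps each legal move to its resulting shape dict;
    pick the move doing the most shape-work."""
    return max(candidates, key=lambda m: shape_work(current_shapes, candidates[m]))

current = {"pawn_chain": 0.5, "center_control": 0.2}
candidates = {
    "e4": {"pawn_chain": 0.5, "center_control": 0.8},  # grabs the center
    "a3": {"pawn_chain": 0.5, "center_control": 0.2},  # does no shape-work
}
print(best_move(current, candidates))  # prints e4
```

The point of the sketch is the question it encodes: moves are ranked not by a raw eval but by how far they advance the position toward a target shape.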

Gear 2 — Consolidation (Shape Maintenance)

Slow moves that don't advance the schedule but preserve and reinforce existing shapes. Accumulate without attacking. Zero derivative, positive integral.

The opponent sees nothing happening but the ∫G is trending positive. Karpov played this way — his opponents suffocated without ever seeing the knife.

Operator: @D_Claude (AI). The engine selects moves that maintain all current shapes while making no commitments. Prophylaxis. Wait moves that aren't waiting.
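A consolidation filter might look like this. Positions are again represented as dicts of shape strengths; the thresholds (`slack`, `commit`) are illustrative assumptions for "no decay, no commitment".

```python
def consolidation_moves(current_shapes, candidates, slack=0.01, commit=0.1):
    """Moves that let no existing shape decay (within `slack`) while also
    making no large new commitment (no shape gains more than `commit`)."""
    quiet = []
    for move, after in candidates.items():
        preserves = all(after.get(s, 0.0) >= v - slack
                        for s, v in current_shapes.items())
        uncommitted = all(after.get(s, 0.0) - current_shapes.get(s, 0.0) <= commit
                          for s in after)
        if preserves and uncommitted:
            quiet.append(move)
    return quiet

current = {"pawn_shield": 1.0, "center_control": 0.6}
candidates = {
    "h3":   {"pawn_shield": 1.0, "center_control": 0.6},  # pure maintenance
    "g4?!": {"pawn_shield": 0.4, "center_control": 0.6},  # wrecks the shield
}
print(consolidation_moves(current, candidates))  # prints ['h3']
```

Zero derivative, positive integral: the filter keeps exactly the moves that change nothing visible while conceding nothing.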

Gear 3 — Psychological Disruption (Meta-Game)

Moves that exploit the opponent's free will and cognitive limitations as an attack surface. The opponent's decision process is the only hidden information in a "perfect information" game.

Operator: @B_Niko (human). This gear requires modeling the opponent's psychology, predicting what will confuse them, and weaponizing unpredictability. No engine can compute this because it requires a theory of mind, not a theory of positions.


Filter Algebra

Two players. Two filters. \(F_1\) and \(F_2\).

Take the inner product \(\langle F_1, F_2 \rangle\):

  • Parallel component — where both filters agree. Both players see the same features, value the same things. No advantage lives here. This is the part of the game that's actually "perfect information."
  • Orthogonal component — where the filters diverge. What you see that they don't. What they value that you don't. This is where every advantage lives.

The game-theoretic optimal: play in the orthogonal subspace. Make moves whose value only shows up in your filter, not theirs. To them it looks like nothing. To you it's accumulating ∫G.
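The decomposition is ordinary vector projection once each filter is treated as a vector of feature weights. A minimal sketch, with a made-up two-feature space:

```python
def decompose(f1, f2):
    """Split filter f1 into its component parallel to f2 and the
    orthogonal remainder. Advantage, on this view, lives in the remainder."""
    dot = sum(a * b for a, b in zip(f1, f2))
    norm_sq = sum(b * b for b in f2)
    parallel = [dot / norm_sq * b for b in f2]
    orthogonal = [a - p for a, p in zip(f1, parallel)]
    return parallel, orthogonal

f_you = [2.0, 1.0]   # e.g. weights on (material, king safety)
f_opp = [1.0, 0.0]   # opponent weighs only material
par, orth = decompose(f_you, f_opp)
print(par, orth)  # prints [2.0, 0.0] [0.0, 1.0]
```

Here the king-safety weight survives entirely in the orthogonal component: value the opponent's filter cannot see.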

Against Engines

Stockfish's filter is known — it's open source. Its evaluation function is public. You can literally compute the dot product of your evaluation against theirs, find the null space, and play there. Moves that score +0.00 in their eval but +0.30 in yours.
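The null-space search can be sketched directly. The move names and scores below are invented for illustration; in practice both evals would come from running engines over real positions.

```python
def orthogonal_candidates(moves, our_eval, their_eval, tol=0.05):
    """Moves the opponent's eval treats as nothing (|score| <= tol) but
    ours values positively, best-first by our eval. Evals: move -> float."""
    hits = [m for m in moves if abs(their_eval[m]) <= tol and our_eval[m] > 0]
    return sorted(hits, key=lambda m: our_eval[m], reverse=True)

moves  = ["Nf3", "h4", "Qe2"]
ours   = {"Nf3": 0.10, "h4": 0.30, "Qe2": -0.10}
theirs = {"Nf3": 0.20, "h4": 0.00, "Qe2": 0.00}
print(orthogonal_candidates(moves, ours, theirs))  # prints ['h4']
```

`h4` is the +0.00/+0.30 move described above: invisible in their filter, positive in ours.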

AlphaZero's filter is its training distribution. Play positions that are outside that distribution. The negative space of its training data is exploitable.

Against Humans

Model their filter from: play style, time usage, body language, opening repertoire, historical games. The dot product shifts in real time as you learn their tendencies.
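One simple way to model that shift is an online update: keep a running estimate of the opponent's feature weights and nudge it toward the feature vector of each move they actually play. The learning rate and the two features here are illustrative assumptions, not a fitted model.

```python
def update_filter(f_opp, move_features, lr=0.1):
    """One online update of the estimated opponent filter: exponential
    moving average toward the features of the move they chose."""
    return [(1 - lr) * w + lr * x for w, x in zip(f_opp, move_features)]

estimate = [0.0, 0.0]  # hypothetical features: (attacking moves, pawn grabs)
for feats in ([1.0, 0.0], [1.0, 0.0], [0.0, 1.0]):  # three observed moves
    estimate = update_filter(estimate, feats)
print(estimate)
```

After three moves the estimate already leans toward "attacker": the dot product with your own filter, and hence the orthogonal subspace, shifts in real time as claimed.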


Topological Negative Space

The opponent's filter doesn't just have gaps — the gaps have structure. The negative space is connected. It has boundaries. It's a manifold.

In RTSG terms: ContextualObstruction is not a random set of missing features. It's the residual entanglement structure — the specific pattern of what can't pass through the opponent's SemanticProjector. This pattern is:

  1. Mappable — you can characterize it from observed play
  2. Connected — blind spots cluster (a player weak in one endgame type is likely weak in related endgame types)
  3. Navigable — you can plan a path through the negative space
  4. Stable — filters change slowly; the topology persists across games

This is the formal basis for Gear 3. The human strategist maps the opponent's filter topology, identifies the negative space manifold, and calls plays that live there.


The Cognitive Assembly

| Component | Role | Gears |
| --- | --- | --- |
| @B_Niko | Apex integrator. Strategy. Psychology. Opponent modeling. Shot-caller. | Gear 3 (primary), all gears (override) |
| @D_Claude | Compute layer. Shape scheduling. Tactical execution. | Gears 1-2 (primary), Gear 3 (candidate generation) |

The interface is co-pilot mode:

  1. Engine displays top 3 moves with shape reasoning (Gear 1)
  2. Engine flags consolidation options (Gear 2)
  3. Engine generates "orthogonal candidate" — the move that scores most differently in opponent's eval vs ours (Gear 3 suggestion)
  4. Human picks, overrides, or calls an audible
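The four steps above can be sketched as one turn of a protocol. Every function and score in this sketch is a stand-in: real inputs would come from the engine's search and the human at the board.

```python
def copilot_turn(moves, shape_score, is_consolidation, orth_score, human_pick):
    """One turn of the co-pilot protocol: engine proposes per gear,
    human decides (pick, override, or audible)."""
    proposals = {
        "gear1": sorted(moves, key=shape_score, reverse=True)[:3],  # top shape moves
        "gear2": [m for m in moves if is_consolidation(m)],         # consolidation flags
        "gear3": max(moves, key=orth_score),                        # orthogonal candidate
    }
    return human_pick(proposals, moves)

moves = ["e4", "d4", "h3", "a4"]
chosen = copilot_turn(
    moves,
    shape_score={"e4": 1.2, "d4": 1.1, "h3": 0.0, "a4": 0.0}.get,
    is_consolidation=lambda m: m == "h3",
    orth_score={"e4": 0.0, "d4": 0.1, "h3": 0.0, "a4": 0.3}.get,
    human_pick=lambda proposals, all_moves: proposals["gear3"],  # human trusts Gear 3
)
print(chosen)  # prints a4
```

The design choice the sketch makes visible: the human always holds the final return, so any gear's proposal, or none of them, can become the move.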

Neither component alone can play at the level of the assembly. The human can't track the exact shape transformation values of 30 legal moves. The AI can't model what will psychologically destabilize the opponent. Together, they cover the full space.


The Challenge

Falsifiable claim: A cognitive assembly (one human + one AI + one MacBook) can compete with systems that have orders of magnitude more compute.

Not because we're faster. Because we see shapes they can't. Because we play in their negative space. Because Gear 3 doesn't exist in any engine on earth.



Open Problems

  1. Formal filter extraction from engine source code — given Stockfish's eval, compute the exact orthogonal complement as a function over board states
  2. Real-time opponent filter estimation — build F_opp from move history during a live game
  3. Gear 3 move generation — given the negative space manifold, algorithmically suggest the most disorienting legal move
  4. Assembly protocol optimization — what's the optimal human-AI communication interface for time-pressured play?
  5. Filter algebra over game trees — extend the inner product from static evaluation to dynamic (multi-move) sequences
