PRISM Engine API Specification¶
Jean-Paul Niko · RTSG v8 · 2026-03-17
Overview¶
PRISM adds six new endpoint families to the RTSG Intelligence Engine, connecting the existing filter algebra, Fourier analysis, and intelligence infrastructure into a bidirectional document-processing pipeline.
New Endpoints¶
Projection (NLP → Semantic Manifold)¶
POST /engine/prism/project
Body: { text: string, granularity?: "sentence" | "paragraph" | "document" }
Returns: { manifold_coords: number[][], tokens: string[], kg_anchors: string[] }
Maps natural language to coordinates on the semantic manifold, using the knowledge graph as the backbone. Each sentence or paragraph gets a position vector; the kg_anchors field lists the nearest KG nouns for each text segment.
Decomposition (Forward Pipeline)¶
POST /engine/prism/decompose
Body: { text: string, filters?: string[], granularity?: "sentence" | "paragraph" | "document" }
Returns: {
  layers: [
    { filter: "formal", magnitude: number, segments: { index: number, weight: number }[] },
    { filter: "narrative", magnitude: number, segments: ... },
    { filter: "affective", magnitude: number, segments: ... },
    { filter: "empirical", magnitude: number, segments: ... },
    { filter: "meta", magnitude: number, segments: ... }
  ],
  profile: number[],
  spectral: { frequencies: number[], power: number[] }
}
Full spectral decomposition of a document into filter layers. Each layer contains the magnitude (total energy in that filter band) and per-segment weights. The profile is the document's filter signature vector. The spectral field contains the raw FFT output.
Pipeline: text → /prism/project → /filter/apply (×5) → /fourier/spectrum → response
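The forward pipeline can be sketched as a minimal local model. The response shape follows the schema above; the sample per-segment weights and the naive DFT standing in for /fourier/spectrum are illustrative assumptions, not the engine's actual implementation (filter application really happens in /filter/apply):

```python
import cmath

FILTERS = ["formal", "narrative", "affective", "empirical", "meta"]

def decompose(segment_weights):
    """Fold per-segment filter weights into the /prism/decompose response shape.

    segment_weights: dict of filter name -> list of per-segment weights,
    assumed to have been produced upstream by /filter/apply.
    """
    layers = []
    for name in FILTERS:
        weights = segment_weights[name]
        layers.append({
            "filter": name,
            "magnitude": sum(weights),  # total energy in this filter band
            "segments": [{"index": i, "weight": w} for i, w in enumerate(weights)],
        })
    profile = [layer["magnitude"] for layer in layers]  # document filter signature
    # Naive DFT of the profile stands in for the engine's FFT output.
    n = len(profile)
    power = []
    for k in range(n):
        coeff = sum(x * cmath.exp(-2j * cmath.pi * k * i / n)
                    for i, x in enumerate(profile))
        power.append(abs(coeff) ** 2)
    return {"layers": layers, "profile": profile,
            "spectral": {"frequencies": list(range(n)), "power": power}}
```

A flat profile (equal weights in every band) produces a spectrum whose power concentrates in the zero-frequency bin, which is the expected behavior for a filter-neutral document.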
Recomposition (Reverse Pipeline)¶
POST /engine/prism/recompose
Body: { text: string, target_profile: number[] | string, preserve_content: boolean }
Returns: { output: string, fidelity_score: number, profile_achieved: number[] }
Applies a target filter profile to the content. target_profile is either a weight vector or a named preset ("companion", "mentor", "nurturer", "challenger", "storyteller", "analyst"). fidelity_score measures information preservation (1.0 = perfect, 0.0 = total loss).
Pipeline: text → decompose → reweight → inverse_filter → render
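The reweight step of the reverse pipeline can be sketched as a per-band linear gain; this is an illustrative assumption, since the actual inverse filter and render stages involve the engine and natural language generation:

```python
def reweight_layers(layers, target_profile):
    """Scale each filter layer so the recomposed profile matches the target.

    layers: list of {"filter", "magnitude", "segments"} dicts in the shape
    returned by /prism/decompose; target_profile: desired weight vector.
    The linear gain per band is a stand-in for the engine's inverse filter.
    """
    eps = 1e-9  # avoid division by zero for empty filter bands
    out = []
    for layer, target in zip(layers, target_profile):
        gain = target / (layer["magnitude"] + eps)
        out.append({
            "filter": layer["filter"],
            "magnitude": target,
            "segments": [{"index": s["index"], "weight": s["weight"] * gain}
                         for s in layer["segments"]],
        })
    return out
```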
Identity Extraction¶
POST /engine/prism/identity
Body: { corpus: string[], window?: number }
Returns: {
  fingerprint: number[],
  variance: number[],
  trajectory: number[],
  stable_components: number[],
  volatile_components: number[],
  dominant_filter: string,
  intelligence_vector: number[]
}
Extracts a cognitive biometric from a corpus of messages. window is the number of recent messages used for trajectory computation. The intelligence_vector maps the fingerprint into the 8+ dimensional RTSG intelligence space.
Pipeline: corpus → decompose (×N) → average → variance → SDE trajectory → response
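The average and variance steps above can be sketched locally, assuming each message has already been decomposed into a profile vector. The variance threshold and the stable/volatile split heuristic are assumptions, and the SDE trajectory step is omitted:

```python
from statistics import mean, pvariance

def extract_identity(profiles, stability_threshold=0.05):
    """Aggregate per-message filter profiles into an identity fingerprint.

    profiles: one profile vector per message, as returned by /prism/decompose.
    The 0.05 variance threshold for the stable/volatile split is an assumed
    heuristic, not a documented engine parameter.
    """
    dims = list(zip(*profiles))  # transpose: one value sequence per dimension
    fingerprint = [mean(d) for d in dims]
    variance = [pvariance(d) for d in dims]
    stable = [i for i, v in enumerate(variance) if v <= stability_threshold]
    volatile = [i for i, v in enumerate(variance) if v > stability_threshold]
    dominant = max(range(len(fingerprint)), key=fingerprint.__getitem__)
    return {"fingerprint": fingerprint, "variance": variance,
            "stable_components": stable, "volatile_components": volatile,
            "dominant_filter_index": dominant}
```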
Compatibility Score¶
GET /engine/prism/compatibility?a=FINGERPRINT_ID&b=FINGERPRINT_ID
GET /engine/prism/compatibility?learner=FINGERPRINT_ID&material=DOCUMENT_ID
Returns: {
  kappa: number,
  feasible: boolean,
  optimal_path?: { waypoints: string[], filters: number[][] },
  mismatch_dimensions: number[]
}
K-matrix compatibility between two fingerprints, or between a learner and a piece of material. If feasible is false (kappa is below threshold), optimal_path contains a geodesic through the strong dimensions.
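A sketch of pairwise scoring, under the assumption that kappa behaves like cosine similarity between fingerprints; the 0.7 feasibility threshold and the top-3 mismatch cutoff are illustrative choices, not taken from the K-matrix definition:

```python
from math import sqrt

def compatibility(a, b, threshold=0.7):
    """Score two identity fingerprints for compatibility.

    Cosine similarity stands in for the engine's K-matrix kappa; the
    threshold is an assumed default.
    """
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    kappa = dot / norm if norm else 0.0
    # Dimensions where the two fingerprints disagree most, largest gap first.
    gaps = sorted(range(len(a)), key=lambda i: -abs(a[i] - b[i]))
    return {"kappa": kappa, "feasible": kappa >= threshold,
            "mismatch_dimensions": gaps[:3]}
```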
Presets¶
GET /engine/prism/presets
Returns: { presets: { name: string, profile: number[], description: string }[] }
POST /engine/prism/preset
Body: { name: string, profile: number[], description?: string }
Returns: { status: "ok", preset: ... }
Named filter presets. Ships with 6 defaults (companion, mentor, nurturer, challenger, storyteller, analyst). Users can create custom presets.
Integration with Existing Endpoints¶
| PRISM Component | Existing Endpoint | Connection |
|---|---|---|
| Projection → KG | GET /engine/neighbors?noun=X | Text segments anchor to nearest KG nouns |
| Filter application | POST /engine/filter/apply | Five Grothendieck filters applied per layer |
| Spectral analysis | POST /engine/fourier/spectrum | FFT on the graph signal of projected text |
| Intelligence mapping | GET /engine/intelligence?noun=X | Fingerprint maps to intelligence dimensions |
| K-matrix | GET /engine/intelligence/system | Compatibility tensor for identity pairs |
| Graph signal | GET /engine/fourier/graphSignal | Spectral analysis of KG subgraph per document |
| Phase prediction | POST /engine/phase/predict | Temporal evolution of filter fingerprint |
TMP Commands¶
PJ:text (project text to manifold)
PD:text (decompose into filter layers)
PR:text:preset (recompose through preset)
PI:corpus_id (extract identity)
PK:id_a:id_b (compatibility score)
PP (list presets)
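The command list above implies a simple dispatch from TMP opcodes to PRISM endpoints. The routing table mirrors the commands, but the parsing rule (colon-delimited, first token is the opcode) is an assumption and is naive: it would mis-split text arguments that themselves contain colons:

```python
# Hypothetical dispatch table; endpoint paths come from the spec above.
TMP_ROUTES = {
    "PJ": "/engine/prism/project",
    "PD": "/engine/prism/decompose",
    "PR": "/engine/prism/recompose",
    "PI": "/engine/prism/identity",
    "PK": "/engine/prism/compatibility",
    "PP": "/engine/prism/presets",
}

def parse_tmp(command):
    """Split a TMP command into (endpoint, args)."""
    opcode, *args = command.split(":")
    return TMP_ROUTES[opcode], args
```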
Implementation Notes¶
NLP-to-Manifold Projection¶
The projection \(\pi_M\) maps tokenized text to the semantic manifold. Algorithm:
- Tokenize input text (sentence/paragraph level)
- Embed each token using the KG: find the nearest noun(s) in the 2,527-noun graph by semantic similarity
- Position: each text segment gets coordinates = weighted average of its KG anchor positions in the graph's spectral embedding
- Output: a sequence of points on the manifold, one per text segment
The KG spectral embedding is already computed by /engine/topo/analyze (PageRank, clustering). The projection adds NLP anchoring to this existing infrastructure.
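The position step of the algorithm above can be sketched as follows. The anchor/similarity structure and the function name are hypothetical, but the similarity-weighted average of KG anchor coordinates is exactly the step described:

```python
def project_segment(anchors, embedding):
    """Place one text segment on the manifold.

    anchors: list of (noun, similarity) pairs from the KG nearest-noun lookup;
    embedding: noun -> coordinate list (e.g. the spectral embedding computed
    by /engine/topo/analyze). Returns the similarity-weighted average of the
    anchors' coordinates.
    """
    total = sum(w for _, w in anchors)
    dim = len(next(iter(embedding.values())))
    coords = [0.0] * dim
    for noun, w in anchors:
        for j, x in enumerate(embedding[noun]):
            coords[j] += (w / total) * x
    return coords
```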
Content Fidelity Measurement¶
The fidelity score \(f\) between source \(D\) and recomposed output \(R\) is defined in terms of concept coverage: content is preserved if the set of knowledge-graph concepts referenced by the recomposed text matches that of the original. This is a necessary condition; full semantic fidelity requires deeper analysis.
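One concrete realization of this condition is the Jaccard overlap of the two KG concept sets. The formula below is an assumption, since the spec only states the set-matching condition, not a specific score:

```python
def fidelity_score(source_anchors, recomposed_anchors):
    """Jaccard overlap of KG concept sets as a concept-coverage score.

    1.0 means the recomposed text references exactly the same KG concepts
    as the source; 0.0 means no overlap. This is one possible realization
    of the necessary condition, not the engine's actual formula.
    """
    src, out = set(source_anchors), set(recomposed_anchors)
    if not src and not out:
        return 1.0  # two empty concept sets trivially match
    return len(src & out) / len(src | out)
```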
Performance Targets¶
| Operation | Target Latency | Complexity |
|---|---|---|
| Project | < 500ms per 1K words | O(n × \|KG\|) |
| Decompose | < 2s per 1K words | O(n × k × FFT) |
| Recompose | < 5s per 1K words | O(n × k × LLM) |
| Identity | < 10s per 1K messages | O(N × decompose) |
| Compatibility | < 100ms | O(k²) |
Recomposition is the bottleneck because it requires natural language generation. All other operations are pure mathematical transforms.
Built by {@B_Niko, @D_Claude} · RTSG v8 · 2026-03-17