ARCHITECTURE · THE SUBSTRATE

The LLM proposes.
The BI Algebra constrains.

Spotonix is a search-and-verification system for analytical questions. It interprets your business intent precisely, constructs a verifiable plan over your Context Graph (the structured record of your warehouse, query history, and team's vocabulary, constructed by Spotonix from your business and owned by you), and compiles SQL only when every concept resolves.

Precise intent. Verifiable plans. Compiled SQL. Three guarantees the LLM cannot give you alone.

Foundation models improve. The substrate compounds.

Read time: 7 min / For: CTOs, heads of platform, technical buyers

The Bitter Lesson, applied.

“The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin.” — Richard S. Sutton, The Bitter Lesson, 2019

We agree with Sutton. That is exactly why we do not fine-tune a model. We do not pretrain on BI corpora. We do not build a domain-specific LLM. Doing any of those things is a bet against the next generation of foundation models — and the bet has always been a losing one.

Instead, Spotonix puts a verification substrate between the LLM and the answer. The LLM's general intelligence does the heavy lifting. The substrate handles the things general intelligence is structurally bad at: determinism, auditability, and grounding in your specific business.

Read another way: an effective search system needs three things.

  1. A problem formulation that exposes the right parameters to the search.
  2. A way to evaluate candidate solutions and feed the result back.
  3. A way to constrain what counts as a valid solution.

Spotonix is those three things, in service of one question class — your business's analytical questions.

What Spotonix is not.

Four adjacent categories will come up in any evaluation. We sit next to all four, structurally different from each. We name them up front so the rest of this page doesn't have to fight assumptions you brought in.

Not

A fine-tuned BI model.

We do not pretrain. We do not RLHF. The intelligence is the foundation model's; the integrity is ours. A fine-tuned BI model is a bet against the next generation of foundation models; that is a bet we explicitly refuse to make.

Not

A semantic layer alone.

Cube.dev, the dbt Semantic Layer, and AtScale define measures and dimensions and expose a query interface. They do not propose plans, verify them, or refuse ambiguity. We ingest your semantic layer as one input and add the algebra on top: a layer above, not a replacement.

Not

Text-to-SQL.

Text-to-SQL generates queries from scratch and hopes they are right. We compose answers from validated building blocks. Nothing is generated; everything is reused or refused.

Not

A chatbot interface.

Chatbots paraphrase. We compose, verify, and compile. The output is a queryable result and a Plan Card — not prose that may or may not match the underlying data.

Three components.
One substrate.

01

ANALYTICAL INTENT

Understand.

Map the question to what the business actually asked — not a literal keyword parse, not a free-text SQL guess. The system frames the intent in the algebra (Segments, Calculations, Answers) before any search begins.

This is the parameter space the LLM proposes against. Not raw SQL. A space the system can reason about, reuse, compose, and explain.
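To make the framing concrete, here is a minimal Python sketch of what an algebra-framed intent could look like. Every class and field name here is a hypothetical illustration, not Spotonix's actual internal representation.

```python
from dataclasses import dataclass

# Hypothetical sketch of the algebra's core types. The real system's
# representation is internal; only the framing idea is shown here.

@dataclass(frozen=True)
class Segment:
    """A named, validated filter over the warehouse (e.g. 'active EU customers')."""
    name: str
    predicate: str  # resolved against the Context Graph, never free text

@dataclass(frozen=True)
class Calculation:
    """A grounded measure (e.g. 'net revenue') with one agreed definition."""
    name: str
    expression: str

@dataclass(frozen=True)
class Intent:
    """What the business actually asked, framed in the algebra before any search."""
    question: str
    segments: tuple[Segment, ...] = ()
    calculations: tuple[Calculation, ...] = ()

intent = Intent(
    question="What was net revenue for active EU customers last quarter?",
    segments=(Segment("active_eu_customers", "region = 'EU' AND status = 'active'"),),
    calculations=(Calculation("net_revenue", "SUM(amount) - SUM(refunds)"),),
)
```

The point of the structure is that the LLM proposes values inside a typed space, not raw SQL strings.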

02

BI ALGEBRA CONSTRUCTION

Interpret.

Compose a plan over the algebra. The LLM proposes candidates; the system accepts only those that resolve completely against your Context Graph — every Calculation grounded, every Answer traceable.

If something is ambiguous, the system refuses. If data quality is suspect, the system flags it without being asked. The Plan Card is visible before any SQL runs. Verification is the contract, not a post-hoc audit log.
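The accept-or-refuse contract can be sketched in a few lines. This is an illustrative model under our own assumptions (a plan as a list of concept names, a graph as a dictionary of candidate definitions), not the real algebra:

```python
# Hypothetical sketch of the accept-or-refuse contract. A real plan is
# richer than a list of concept names, but the shape is the same:
# every concept grounds uniquely, or the whole plan is refused.

class Refused(Exception):
    """Raised instead of guessing when a concept cannot resolve."""

def resolve_plan(plan: list[str], graph: dict[str, list[str]]) -> dict[str, str]:
    """Accept a candidate plan only if every concept grounds uniquely."""
    resolved = {}
    for concept in plan:
        candidates = graph.get(concept, [])
        if not candidates:
            raise Refused(f"unknown concept: {concept!r}")
        if len(candidates) > 1:
            raise Refused(f"ambiguous concept: {concept!r} ({len(candidates)} definitions)")
        resolved[concept] = candidates[0]
    return resolved

graph = {
    "net_revenue": ["SUM(amount) - SUM(refunds)"],
    "active_users": ["COUNT(DISTINCT user_id)", "COUNT(*) FILTER (WHERE active)"],
}
resolve_plan(["net_revenue"], graph)    # accepted: grounds uniquely
# resolve_plan(["active_users"], graph) would raise Refused: two definitions exist
```

Refusal is a first-class outcome here, not an error path: an ambiguous concept never silently picks a definition.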

03

CONTEXT GRAPH

Compound.

Every accepted plan is persisted to your Context Graph — adding Segments, Calculations, and Answers your team has validated. The graph grows denser with use.

Future questions answer faster because the search has more to compose from. The graph lives in your VPC, adheres to your compliance posture, and leaves with you if you leave us. The substrate is yours.
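A minimal sketch of the compounding loop, with a hypothetical `ContextGraph` class standing in for the real graph:

```python
# Hypothetical sketch of the compounding loop: every accepted plan adds
# validated concepts, so later questions have more to compose from.

class ContextGraph:
    def __init__(self) -> None:
        self.concepts: dict[str, str] = {}

    def persist(self, plan: dict[str, str]) -> None:
        """An accepted plan's concepts become reusable building blocks."""
        self.concepts.update(plan)

    def known(self, concept: str) -> bool:
        return concept in self.concepts

g = ContextGraph()
g.persist({"net_revenue": "SUM(amount) - SUM(refunds)"})
g.known("net_revenue")  # True: the next question reuses it, no re-derivation
```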

One question.
Five disciplined stages.
Each one compounds the next.

01

ASK

A business question, in your team's vocabulary.

02

UNDERSTAND INTENT

What is the user actually asking? Map to Segments, Calculations, Answers.

03

CONSTRUCT ALGEBRA

LLM proposes a plan; the algebra accepts only what resolves completely.

04

COMPILE SQL

Compiled from the plan. Same plan → same SQL, by design.

05

PERSIST TO CONTEXT GRAPH

Accepted plan added to your Context Graph. The substrate compounds.
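The determinism claim in stage 04 can be illustrated with a toy compiler: serialize the plan canonically, then derive both an audit handle and the SQL from that canonical form. Everything here (the plan shape, the hash-prefix id) is our illustrative assumption, not the production compiler:

```python
import hashlib
import json

# Hypothetical sketch of "same plan -> same SQL": compile from a canonical
# serialization of the plan, with no free-text generation on the path.

def compile_sql(plan: dict) -> tuple[str, str]:
    canonical = json.dumps(plan, sort_keys=True)                   # canonical plan form
    plan_id = hashlib.sha256(canonical.encode()).hexdigest()[:12]  # audit handle
    where = " AND ".join(sorted(plan["segments"]))
    sql = f"SELECT {plan['calculation']} FROM {plan['table']} WHERE {where}"
    return plan_id, sql

plan_a = {"table": "orders", "calculation": "SUM(amount)", "segments": ["region = 'EU'"]}
plan_b = dict(reversed(list(plan_a.items())))  # same plan, different key order
assert compile_sql(plan_a) == compile_sql(plan_b)  # byte-identical output
```

Because the input to compilation is canonicalized, key order, whitespace, and session state cannot change the output.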

The LLM is the search heuristic. The BI Algebra is the constraint. The Context Graph is both the space and the moat, growing denser with every accepted plan. None of the three is sufficient alone; all three together are what we mean when we say substrate.

IN PRACTICE · HITL

Human in the loop is reserved for new business concepts — unfamiliar Calculations, ambiguous Segments, first-time Answers. Once a concept is in your Context Graph, repeated questions over it compile without intervention.

IN PRACTICE · LATENCY

Cold start: minutes. Warm graph: seconds. Latency improves with use rather than degrading, because the substrate has more to compose from with every accepted plan.

Four properties
the substrate guarantees.

Concrete invariants the architecture holds. Useful for evaluation, RFP responses, and the diligence call with your security and platform teams.

MODEL-AGNOSTIC

Tested today on OpenAI and Anthropic frontier models. Runs on your enterprise contract — no separate LLM relationship to negotiate. Swap models on Monday; the plans still close, the SQL still compiles.

COMPOUNDING

Every accepted plan adds Segments, Calculations, and Answers to the Context Graph. The graph gets denser with use; subsequent questions resolve faster because the search has more to compose from.

VERIFIABLE

Every executed query has a verified plan attached. Audit trail is a property of the architecture, not a post-hoc log. Governance does not degrade under ad-hoc demand.

PORTABLE

The Context Graph lives in your VPC and inherits your compliance posture. Export and import as Parquet today — standard, columnar, no proprietary format. If you stop using Spotonix, the asset goes with you.

Five generations of attempts.
One new substrate.

Each prior generation solved a piece of the problem and created the next one. We won't name vendors here — product positioning changes quarterly. The approaches are stable. Map your candidates onto them yourself.

01

Pre-built dashboards

Static views over governed data. The 1990s answer to "give me the numbers."

Breaks on every novel question. Ships the answer; never the reasoning.

02

Per-analyst SQL & notebooks

Skilled humans translate questions to SQL inside individual notebooks.

Knowledge walks out the door with the analyst. Doesn't scale to ad-hoc demand.

03

Generative text-to-SQL

An LLM writes SQL from scratch each time, from schema and the user's prompt.

Different SQL Monday vs Tuesday for the same question. No reuse. No memory.

04

Warehouse-bound semantic agents

AI generates SQL against a forced semantic model inside one cloud vendor's stack.

Locked to a single warehouse, often a single LLM. Sovereignty story doesn't survive procurement.

05

Conversational AI over your warehouse

A chat UI calls an LLM that calls the warehouse. The interface is the product.

No verification, no audit, no reuse. Same prompt yields different prose answers across sessions.

06

Spotonix · Context Graph + algebra

The LLM proposes plans over your Segments, Calculations, and Answers. The algebra accepts only what resolves. Accepted plans compound into your Context Graph.

Closes all five gaps. Composed (not generated), verified (not guessed), portable (Parquet), LLM-agnostic, and yours to compound.

We will happily map specific vendors onto these approaches on a call — including where they're stronger than us and where we lose.

Talk to a founder
about your architecture.

Bring a real question, a real schema, or a real worry. We will tell you whether Spotonix actually solves your problem — including when it does not.

Deployment includes a Spotonix analyst alongside your team for the first quarter. The architecture stands on its own; the concierge is how it lands.

The LLM proposes. The BI Algebra constrains.