TPMN Checker is in pre-GA. Features and pricing may change before General Availability.

Company

GEM²

Verification infrastructure for AI.

What GEM²-AI is

GEM²-AI is a verification and orchestration platform for AI outputs. It solves a specific problem: AI generates confident-sounding text that may contain unsupported claims, and there is no standard way to detect this before it reaches users. GEM² addresses this with three components:

1

A specification language

TPMN — defines what AI should produce and what evidence standards it must meet.

2

A verification tool

TPMN Checker — audits AI outputs against those specifications.

3

An orchestration platform

GEM²-AI — coordinates multiple AI agents under formal reasoning contracts.

Product hierarchy

TPMN Skill Standard

Workflow Governance · Open Source

12 lifecycle skills that replace prose with algebraic contracts. Plan, execute, verify, archive — every session inherits what the last session proved. The Proof Cycle compounds knowledge, not skills.

MIT
Learn more →

TPMN Checker

Epistemic Verification · Shipped

Audits AI outputs for unsupported claims, scores epistemic quality, and composes grounded replacements. Works inside Claude, ChatGPT, and any MCP-compatible IDE.

PRODUCT
Try it →
↓ both built on

TPMN-PSL

Open Specification · CC-BY 4.0

The algebraic foundation. Defines epistemic tags, SPT checks, three-phase protocol, and the Panini ontological layer. Anyone can implement it.

SPEC
↓ powers

GEM²-AI Platform

SAS Ecosystem · Orchestrates Both

Independent Sovereign AI micro-Services (SAS). Each SAS has its own repo, database, deployment, and AI ARCHITECT session.

PLATFORM

SAS components: TPMN Checker · Core Brain · Knowledge Graph · Workflow Mgmt · User Mgmt · Homepage

One algebraic foundation (TPMN-PSL). Two systems: Skill Standard (workflow) + Checker (verification). GEM²-AI orchestrates both.

What makes GEM²-AI different

Not a guardrail

Guardrails block harmful outputs after the fact. GEM² verifies the reasoning process — whether claims are supported by evidence, whether the AI is extrapolating beyond its data, whether the confidence level is justified.

Not a prompt library

TPMN is a specification language, not a collection of prompt templates. You define what the AI must prove, not how to phrase the question.

Not an agent framework

Agent frameworks coordinate tool calls. GEM²-AI coordinates reasoning contracts — each agent declares what it will produce and what evidence standards it must meet, and the system verifies compliance.
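The contract idea above can be sketched in Go (the platform's implementation language). All names and thresholds here are illustrative, not the actual GEM²-AI API: an agent declares its deliverable and evidence standard up front, and the orchestrator checks compliance afterward.

```go
package main

import "fmt"

// ReasoningContract is an illustrative sketch: an agent declares what
// it will produce and what share of its claims must be evidence-backed.
type ReasoningContract struct {
	Produces         string  // what the agent commits to deliver
	MinGroundedRatio float64 // required share of evidence-backed claims
}

// Report is what comes back after verification: total claims made
// and how many were confirmed as grounded.
type Report struct {
	Claims   int
	Grounded int
}

// Complies checks a verification report against the declared contract.
func (c ReasoningContract) Complies(r Report) bool {
	if r.Claims == 0 {
		return false
	}
	return float64(r.Grounded)/float64(r.Claims) >= c.MinGroundedRatio
}

func main() {
	contract := ReasoningContract{Produces: "release-notes summary", MinGroundedRatio: 0.8}
	fmt.Println(contract.Complies(Report{Claims: 10, Grounded: 9})) // true
	fmt.Println(contract.Complies(Report{Claims: 10, Grounded: 5})) // false
}
```

The point of the sketch is the inversion it illustrates: the contract is declared before generation, so compliance is a checkable property of the output rather than a judgment call.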

How it works

Step 1

Define a specification

What AI should produce, what evidence it needs, what constraints apply.

Step 2

AI generates output

Using Claude, OpenAI, Gemini, or any LLM.

Step 3

Checker verifies

Scores claims as grounded, inferred, extrapolated, unknown, or speculative. Flags overclaims.

Step 4

Platform orchestrates

At scale, specialized AI agents verify each other under explicit contracts.
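The four steps can be sketched end-to-end in Go. This is a toy illustration of the flow, not the Checker's real API; the `Spec`, `Claim`, and `Verify` names are ours.

```go
package main

import "fmt"

// Claim is one statement from an AI output together with the
// epistemic label the checker assigned to it (step 3).
type Claim struct {
	Text string
	Tag  string // "grounded", "inferred", "extrapolated", "unknown", "speculative"
}

// Spec captures step 1 in miniature: what counts as acceptable output.
// Here the only constraint is a cap on claims without evidence.
type Spec struct {
	MaxUnsupported int
}

// Verify flags every claim that overreaches the evidence and fails
// the output if the spec's cap is exceeded.
func Verify(s Spec, claims []Claim) (flagged []Claim, ok bool) {
	for _, c := range claims {
		if c.Tag == "extrapolated" || c.Tag == "unknown" {
			flagged = append(flagged, c)
		}
	}
	return flagged, len(flagged) <= s.MaxUnsupported
}

func main() {
	spec := Spec{MaxUnsupported: 0}
	output := []Claim{
		{"Latency fell 12% in the benchmark run.", "grounded"},
		{"Latency will keep falling every release.", "extrapolated"},
	}
	flagged, ok := Verify(spec, output)
	fmt.Println(len(flagged), ok) // 1 false
}
```

In the example, the grounded benchmark claim passes while the extrapolated trend claim is flagged, so the output as a whole fails the spec.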

Business model

Layer 1

TPMN Specification

Open standard. Free. Drives adoption.

Open · CC-BY 4.0

Layer 2

TPMN Checker

Freemium. 300 free credits, then one-time Pre-GA tiers from $9. Converts developers.

Shipped

Layer 3

GEM²-AI Platform

Enterprise licensing. Monetizes production deployments.

Early access

Core concepts

TPMN

Truth-Provenance Markup Notation. Open specification for structuring and auditing AI reasoning.

Learn more →

EEF

Epistemic Evidence Framework. Tags every AI claim as grounded (⊢), inferred (⊨), extrapolated (⊬), unknown (⊥), or speculative (?).
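The five EEF tags map naturally onto an enumeration. This Go representation is illustrative (the spec defines the tags, not this encoding):

```go
package main

import "fmt"

// EpistemicTag mirrors the five EEF labels. The Go names and ordering
// are ours; the symbols are the ones the framework uses.
type EpistemicTag int

const (
	Grounded     EpistemicTag = iota // ⊢ directly supported by cited evidence
	Inferred                         // ⊨ follows logically from grounded claims
	Extrapolated                     // ⊬ extends beyond the available data
	Unknown                          // ⊥ no supporting evidence found
	Speculative                      // ?  explicitly hypothetical
)

// Symbol returns the framework's notation for a tag.
func (t EpistemicTag) Symbol() string {
	return [...]string{"⊢", "⊨", "⊬", "⊥", "?"}[t]
}

func main() {
	for _, t := range []EpistemicTag{Grounded, Inferred, Extrapolated, Unknown, Speculative} {
		fmt.Println(t, t.Symbol())
	}
}
```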

SPT

Structural Prohibition Taxonomy. Detects three categories of reasoning errors: state→trait, local→global, and thin-evidence→broad-claim.
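One of the three categories, thin-evidence→broad-claim, can be illustrated with a toy rule. The detector and its threshold are our own sketch, not the taxonomy's actual logic:

```go
package main

import "fmt"

// The three SPT categories named by the taxonomy.
const (
	StateToTrait        = "state→trait"               // one observation read as a stable property
	LocalToGlobal       = "local→global"              // one context generalized to all contexts
	ThinEvidenceToBroad = "thin-evidence→broad-claim" // sweeping claim on little support
)

// FlagThinEvidence is a toy detector for the third category: a claim
// phrased as universal ("all", "always", "never") is flagged when it
// rests on fewer than three supporting data points. The threshold of
// three is arbitrary, chosen only for illustration.
func FlagThinEvidence(universal bool, evidenceCount int) (category string, flagged bool) {
	if universal && evidenceCount < 3 {
		return ThinEvidenceToBroad, true
	}
	return "", false
}

func main() {
	// "All users prefer the new layout" backed by a single survey.
	cat, hit := FlagThinEvidence(true, 1)
	fmt.Println(cat, hit)
}
```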

SAS

Sovereign AI Service. A microservice exclusively owned and controlled by a dedicated AI actor. AI is the sovereign controller — not a consumer of the service. Coupled by contract, not convention.

Learn more →

Technology

Language: Go 1.24.0

AI Providers: Claude, OpenAI, Gemini

Architecture: BYO-Compute (user's API keys)

Protocol: MCP (Model Context Protocol)

Contact

© 2026 GEM² (gemsquared)