GEM²

Verification infrastructure for AI.

What GEM²-AI is

GEM²-AI is a verification and orchestration platform for AI outputs. It solves a specific problem: AI generates confident-sounding text that may contain unsupported claims, and there is no standard way to detect this before it reaches users. GEM² addresses this with three components:

1

A specification language

TPMN — for defining what AI should produce and what evidence standards it must meet.

2

A verification tool

TPMN Checker — audits AI outputs against those specifications.

3

An orchestration platform

GEM²-AI — coordinates multiple AI agents under formal reasoning contracts.

Product hierarchy

TPMN-PSL

Open Specification · CC-BY 4.0

Not a product. Not a SAS. Pure specification, zero executable code. Defines the epistemic tags, SPT checks, the three-phase protocol, and the Panini ontological layer. Anyone can implement it.

OPEN
↓ implements

GEM²-AI

SAS Ecosystem · 6 Services

Independent Sovereign AI micro-Services. Each SAS has its own repo, database, deployment, and AI ARCHITECT session. Coordination is a role, not a hierarchy.

PRODUCT

TPMN Checker

Core Brain

Knowledge Graph

Workflow Mgmt

User Mgmt

Homepage

Analogous to HTTP (spec) vs. nginx (implementation). TPMN-PSL defines the rules. GEM²-AI implements them.

What makes GEM²-AI different

Not a guardrail

Guardrails block harmful outputs after the fact. GEM² verifies the reasoning process — whether claims are supported by evidence, whether the AI is extrapolating beyond its data, whether the confidence level is justified.

Not a prompt library

TPMN is a specification language, not a collection of prompt templates. You define what the AI must prove, not how to phrase the question.

Not an agent framework

Agent frameworks coordinate tool calls. GEM²-AI coordinates reasoning contracts — each agent declares what it will produce and what evidence standards it must meet, and the system verifies compliance.
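One way to picture a reasoning contract, sketched as a Go type. The field names and the `knowledge-graph` agent are hypothetical; the document does not publish a contract schema, only that each agent declares what it will produce and what evidence standard it must meet.

```go
package main

import "fmt"

// ReasoningContract is an illustrative sketch of an agent's
// declaration: what it commits to produce, the minimum evidence
// level each claim must reach, and which SPT error categories
// the output must avoid.
type ReasoningContract struct {
	Agent           string   // which agent or SAS is bound
	Deliverable     string   // what the agent commits to produce
	MinimumEvidence string   // e.g. "grounded" or "inferred"
	ProhibitedMoves []string // SPT categories to avoid
}

func main() {
	c := ReasoningContract{
		Agent:           "knowledge-graph",
		Deliverable:     "entity summary with cited sources",
		MinimumEvidence: "grounded",
		ProhibitedMoves: []string{"local→global", "thin-evidence→broad-claim"},
	}
	fmt.Printf("%s must produce %q at evidence level %s\n",
		c.Agent, c.Deliverable, c.MinimumEvidence)
}
```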

How it works

Step 1

Define a specification

What AI should produce, what evidence it needs, what constraints apply.

Step 2

AI generates output

Using Claude, OpenAI, Gemini, or any LLM.

Step 3

Checker verifies

Scores claims as grounded, inferred, extrapolated, or unknown. Flags overclaims.

Step 4

Platform orchestrates

At scale, specialized AI agents verify each other under explicit contracts.
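The four steps above can be sketched as a pipeline in Go. All names are illustrative and `generate` / `check` are stand-ins for an LLM call and the TPMN Checker respectively; the platform's real API is not published in this document.

```go
package main

import "fmt"

// Spec mirrors Step 1: what to produce and what constraints apply.
type Spec struct {
	Deliverable string
	Constraints []string
}

// Claim is one scored statement from the checker (Step 3).
type Claim struct {
	Text  string
	Label string // grounded, inferred, extrapolated, or unknown
}

// generate stands in for Step 2: a call to Claude, OpenAI,
// Gemini, or any LLM.
func generate(s Spec) string {
	return "draft output for: " + s.Deliverable
}

// check stands in for Step 3: the checker scores each claim
// and would flag overclaims (extrapolated beyond the data).
func check(output string) []Claim {
	return []Claim{{Text: output, Label: "grounded"}}
}

func main() {
	spec := Spec{Deliverable: "market summary", Constraints: []string{"cite sources"}}
	out := generate(spec)          // Step 2
	for _, c := range check(out) { // Step 3
		fmt.Printf("[%s] %s\n", c.Label, c.Text)
	}
}
```

Step 4 would wrap this loop so that one agent's checked output becomes another agent's input, each exchange governed by a contract.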

Business model

Layer 1

TPMN Specification

Open standard. Free. Drives adoption.

Open · CC-BY 4.0

Layer 2

TPMN Checker

Freemium. 30 checks/day free, 100/day at $9/mo. Converts developers.

Shipped

Layer 3

GEM²-AI Platform

Enterprise licensing. Monetizes production deployments.

Early access

Core concepts

TPMN

Truth-Provenance Markup Notation. Open specification for structuring and auditing AI reasoning.


EEF

Epistemic Evidence Framework. Tags every AI claim as grounded (⊢), inferred (⊨), extrapolated (⊬), unknown (⊥), or speculative (?).
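The five tags and their symbols come straight from the list above; encoding them as a Go type is purely illustrative, not part of the EEF spec.

```go
package main

import "fmt"

// EpistemicTag labels how a claim is supported, per the EEF
// categories above. The Go type itself is illustrative only.
type EpistemicTag int

const (
	Grounded     EpistemicTag = iota // ⊢ directly supported by evidence
	Inferred                         // ⊨ follows from evidence by inference
	Extrapolated                     // ⊬ goes beyond the available data
	Unknown                          // ⊥ no evidence either way
	Speculative                      // ? explicitly conjectural
)

// Symbol returns the EEF notation for a tag.
func (t EpistemicTag) Symbol() string {
	switch t {
	case Grounded:
		return "⊢"
	case Inferred:
		return "⊨"
	case Extrapolated:
		return "⊬"
	case Unknown:
		return "⊥"
	default:
		return "?"
	}
}

func main() {
	fmt.Println(Grounded.Symbol(), Extrapolated.Symbol()) // ⊢ ⊬
}
```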

SPT

Structural Prohibition Taxonomy. Detects three categories of reasoning errors: state→trait, local→global, and thin-evidence→broad-claim.
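The three error categories are named in the card above; pairing them with a flagged claim, as a checker finding might, is sketched below. The `Finding` type and the example claim are hypothetical.

```go
package main

import "fmt"

// SPTViolation names the three reasoning-error categories of the
// Structural Prohibition Taxonomy. Illustrative encoding only.
type SPTViolation string

const (
	StateToTrait  SPTViolation = "state→trait"               // one observation treated as a stable property
	LocalToGlobal SPTViolation = "local→global"              // a local result generalized to the whole
	ThinToBroad   SPTViolation = "thin-evidence→broad-claim" // sweeping claim from little evidence
)

// Finding pairs a flagged claim with its violation category.
type Finding struct {
	Claim     string
	Violation SPTViolation
}

func main() {
	f := Finding{
		Claim:     "the model failed once, so it is unreliable",
		Violation: StateToTrait,
	}
	fmt.Printf("%s: %q\n", f.Violation, f.Claim)
}
```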

SAS

Sovereign AI Service. A microservice exclusively owned and controlled by a dedicated AI actor. AI is the sovereign controller — not a consumer of the service. Coupled by contract, not convention.


Technology

Language: Go 1.24.0

AI Providers: Claude, OpenAI, Gemini

Architecture: BYO-Compute (user's API keys)

Storage: PostgreSQL + pgvector

Deployment: Fly.io (Tokyo region primary)

Protocol: MCP (Model Context Protocol)

Contact

© 2026 GEM² (gemsquared)