TPMN Checker is in Pre-GA. Features and pricing may change before General Availability.

Try the truth filter

Paste any AI-generated text. See what the filter catches.

Free to try, right now: 12 free checks per day, no API key needed. Need unlimited tests or advanced models? Bring your own API key. Your key and data are never stored.

GEM² Truth Filter Result (example output)
EXTRAPOLATION DETECTED

Truth Score: 38%
Reliability: 0.38 · Contract: 0.42
Tags: Bullish · uncited · Out of scope · extrapolated

Violations:
  • Trend Overclaim (S→T) × 1
  • Overgeneralized (L→G) × 1
  • Unsupported Leap (Δe→∫de) × 2

Dimension Scores
  • Source Attribution: 0.30 FAIL
  • Evidence Quality: 0.32 FAIL
  • Claim Grounding: 0.38 FAIL
  • Temporal Validity: 0.35 FAIL
  • Scope Accuracy: 0.30 FAIL
  • Logical Consistency: 0.60 WARN

Claim-Level Audit

  • "Q4 revenue is projected to grow 34% based on current pipeline trends" (S→T, 0.30): Pipeline snapshot treated as confirmed revenue.
  • "Customer churn has dropped to an industry-leading 2.1%" (L→G, 0.35): Single quarter compared to unspecified industry benchmark.
  • "Our NPS of 72 confirms best-in-class customer satisfaction" (Δe→∫de, 0.40): One metric elevated to definitive proof of satisfaction.
  • "Enterprise segment ARR reached $18M this quarter" (no violation, 0.75): Verifiable internal metric, but no comparison period given.
  • "AI integration will drive 50% efficiency gains across operations by 2027" (Δe→∫de, 0.20): Speculative projection with no pilot data or methodology.

Commentary

This board deck presents optimistic projections as established fact. Pipeline figures are treated as confirmed revenue, single-quarter metrics are called industry-leading without benchmarks, and future efficiency claims lack supporting pilot data.

  • Pipeline-based revenue projection presented as a confirmed growth trajectory
  • Churn rate compared to unnamed industry benchmark — no source or methodology
  • NPS score used as sole proof of customer satisfaction without context or trend data
  • AI efficiency gains projected to 2027 without pilot results or implementation plan

Example output — paste your own text above to get a live analysis.

The question isn't whether the answer is right or wrong.

Three AIs. Same report. None of them warned you.

Try it free — no install required

Runs inside Claude and ChatGPT. 300 free credits.

Runs inside your existing AI workflow. No integration project. No architecture change.

What happens when you run TPMN Checker

Four steps. Fully automated.

  1. Input: Paste any AI-generated text — a report, forecast, analysis, PRD, research summary.
  2. Score: Seven dimensions evaluated independently. Each scored 0.0–1.0.
  3. Flag: Structural logic errors detected (S→T, L→G, Δe→∫de violations).
  4. Rewrite: Auto-compose strips overclaims. Before and after, side by side.

Seven dimensions scored

  • Source Attribution: Claims with no traceable evidence
  • Evidence Quality: Thin or outdated supporting data
  • Claim Grounding: Assertions presented as fact without basis
  • Temporal Validity: Stale data treated as current
  • Scope Accuracy: Local findings overgeneralized
  • Logical Consistency: Internal contradictions
  • Prompt Alignment: Does the output match what was asked?

  • S→T: Snapshot treated as permanent trend
  • L→G: Local truth presented as universal
  • Δe→∫de: Sweeping claim from thin evidence
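To make the scoring concrete, here is a minimal sketch, with the caveat that the aggregation rule and thresholds are assumptions rather than TPMN Checker's documented algorithm: the headline truth score is taken as a simple mean of the dimension scores, and FAIL/WARN labels come from cutoffs of 0.5 and 0.7, values chosen only because they reproduce the sample output on this page.

```python
# Hypothetical scoring sketch -- not TPMN Checker's actual algorithm.
# Dimension scores are taken from the sample report on this page; the
# aggregation (simple mean) and the thresholds below are assumptions.

FAIL_BELOW = 0.5   # assumed: scores under this are labeled FAIL
WARN_BELOW = 0.7   # assumed: scores in [0.5, 0.7) are labeled WARN

def label(score: float) -> str:
    """Map a 0.0-1.0 dimension score to a FAIL/WARN/PASS label."""
    if score < FAIL_BELOW:
        return "FAIL"
    if score < WARN_BELOW:
        return "WARN"
    return "PASS"

dimensions = {
    "Source Attribution": 0.30,
    "Evidence Quality": 0.32,
    "Claim Grounding": 0.38,
    "Temporal Validity": 0.35,
    "Scope Accuracy": 0.30,
    "Logical Consistency": 0.60,
}

# Headline score as the mean of the dimension scores (assumption).
truth_score = round(sum(dimensions.values()) / len(dimensions), 2)

for name, score in dimensions.items():
    print(f"{name}: {score:.2f} {label(score)}")
print(f"Truth Score: {truth_score:.0%}")
```

On the sample's six visible dimension scores, the mean works out to 0.375, which matches the 38% headline score shown above.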

AI scores. You adjust. The standard improves.

Today, AI evaluates its own output. But who decides what “correct” looks like?

Not us alone. You.

When you use TPMN Checker, you see scores across seven dimensions. If you disagree with a score, that disagreement is data. Collected with your consent, aggregated across users, and analyzed for patterns — your evaluations become the ground truth.

Human at the edge, not in the loop.

Humans set the standard. AI enforces it at scale.

Collect → Analyze → Calibrate → Adapt → Case Law

Start verifying AI output now

Free access

  1. Connect in 30 seconds (Claude.ai, ChatGPT, or Cursor)
  2. Ask your AI tool to run gem2_truth_filter
  3. See your first reliability report

Get started free →

No credit card. No setup project. 300 free credits.

Pre-GA — lock in pricing before General Availability

Tier     | Price | Credits | RPM
Free     | $0    | 300     | 5
Starter  | $9    | 2,000   | 15
Builder  | $19   | 9,000   | 30
Founder  | $29   |         | 60

One-time payment, not subscription. Pricing and credits subject to change during Pre-GA. Get early access →

Your data is never stored by default. Never shared. Never used for training. How we protect your data →