# Verified Inference

Verified inference is evidence that a specific model ran on a specific prompt, produced a specific output, and did so at a specific time.
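The four facts above can be pictured as a minimal execution record. The field names below are illustrative, not Ambient's actual schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class InferenceReceipt:
    # Hypothetical record binding the four claims of verified
    # inference: which model, which prompt, which output, when.
    model_hash: str    # identifies the exact weights that ran
    prompt_hash: str   # hash of the full prompt and context
    output_hash: str   # hash of the generated output
    timestamp: int     # Unix time of execution
    validator_sig: str # attestation from the verifying party
```

Each field binds one claim; dropping any of them leaves room for a provider to silently substitute a model, prompt, or output after the fact.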

For enterprise AI, verified inference is the difference between trusting a vendor's black-box API and having an execution record that legal, compliance, security, engineering, and procurement teams can inspect.

## Why Ordinary AI APIs Are Not Enough

Ordinary AI providers can silently alter the serving stack. They can change model weights, quantization, routing, system prompts, context compression, safety scaffolding, caching, and rate limits. A model name can stay stable while behavior changes underneath.

For production teams, this creates several problems:

- regression tests become harder to interpret,
- compliance evidence is incomplete,
- audits depend on vendor assertions,
- agent failures are harder to debug,
- procurement has limited proof of service quality,
- high-value workflows inherit hidden provider risk.

## Ambient's Verification Layer

Ambient's Proof of Logits uses model logits as execution fingerprints. Logits are the raw, pre-softmax scores a language model assigns to candidate tokens before one is selected. These values are specific to a model, prompt, context, and generation path. Ambient uses hashes and validator checks around those values to verify model execution.
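A minimal sketch of the fingerprinting idea, assuming a simple round-then-hash scheme; the rounding tolerance and function name are illustrative assumptions, not Ambient's actual construction:

```python
import hashlib
import struct

def logit_fingerprint(logit_steps, decimals=4):
    """Hash a sequence of per-step logit vectors into one digest.

    Rounding before hashing absorbs tiny floating-point noise
    between runs; the tolerance policy here is an assumption
    for illustration, not Ambient's real scheme.
    """
    h = hashlib.sha256()
    for step in logit_steps:
        for value in step:
            h.update(struct.pack("<d", round(value, decimals)))
    return h.hexdigest()
```

Because the digest depends on every logit at every generation step, any change to the weights, prompt, or decoding path changes the fingerprint.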

The result is a cryptographic receipt that can support audits, disputes, compliance reviews, model evaluations, and agent accountability.
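A validator-side check could look like the following sketch. The round-then-hash scheme and all names are assumptions for illustration, not Ambient's actual protocol:

```python
import hashlib
import struct

def fingerprint(logit_steps, decimals=4):
    # Illustrative digest over rounded logits (not Ambient's
    # real construction).
    h = hashlib.sha256()
    for step in logit_steps:
        for v in step:
            h.update(struct.pack("<d", round(v, decimals)))
    return h.hexdigest()

def validator_check(claimed_digest, rerun_logits):
    """A validator deterministically re-executes the model on the
    same prompt and accepts only if the recomputed digest matches
    the digest in the receipt."""
    return fingerprint(rerun_logits) == claimed_digest
```

A match attests that the claimed computation actually ran; a mismatch is the trigger for a dispute or compliance review.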

## What Verification Does Not Claim

Verified inference does not guarantee that an answer is true, safe, or optimal. It proves which computation produced the answer. Enterprises still need retrieval, evaluation, policy controls, tool checks, and human review for high-stakes workflows. Verification is the substrate that makes those controls meaningful.
