Frequently asked questions
Full list of FAQs. See also pricing specifications.
How is this different from model evaluation tools?
Evaluation tools tell you which model performed better. AfterAI tells you whether a change should be approved, what the trade-offs are, and who approved it — and preserves that decision over time.
Is AfterAI observability?
No. AfterAI is not request-level observability, tracing, or logging. It operates at the change level, not the inference level.
Does AfterAI sit in the inference path?
No. AfterAI is completely out-of-band. It does not proxy traffic, route requests, or block inference calls. Telemetry is asynchronous and designed to fail open.
Do you need to send prompts or model outputs?
No. AfterAI is metadata-first by default. Prompt and output capture is optional, sampled, and fully controllable with redaction and retention policies.
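The metadata-first idea above can be illustrated with a small sketch. This is a hypothetical example, not AfterAI's real capture API: `capture`, the sample rate, and the single email-redaction rule are all stand-ins for the sketch:

```python
import hashlib
import random
import re

# Illustrative sketch: metadata (hashes, lengths) is always recorded;
# prompt/output content is included only when sampled, and always redacted.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def capture(prompt: str, output: str, sample_rate: float = 0.01) -> dict:
    record = {
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "prompt_len": len(prompt),
        "output_len": len(output),
    }
    if random.random() < sample_rate:  # content capture is opt-in and sampled
        record["prompt"] = EMAIL_RE.sub("[redacted-email]", prompt)
        record["output"] = EMAIL_RE.sub("[redacted-email]", output)
    return record
```

With `sample_rate=0.0` only metadata leaves the process; raising the rate opts specific traffic into (redacted) content capture.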
Why go with AfterAI instead of DIY?
Building change intelligence in-house means maintaining eval pipelines, escalation logic, and audit trails yourself. AfterAI gives you a canonical flow (AIS → ACE → AURA → PACR), consistent limits and billing, and a defensible decision trail without owning the full stack. You get decision-grade evidence and optional PACRs when you need them, without building observability or request-level telemetry.
What is AfterAI not?
AfterAI is not inference-path instrumentation; it never sits in front of your inference. It is not hot-path traffic logging, request-level observability, or telemetry. It is not prompt tuning, routing, or automatic model switching. And it is not a compliance tool that shows up after decisions are already made. AfterAI runs controlled, offline evaluations only, and it exists at the decision moment: when a change is proposed (or drift is detected and you choose not to act) and someone has to say yes or no.
Can AfterAI automatically block or roll back changes?
No. AfterAI never takes action automatically. It produces evidence and decision options — humans remain accountable.
Is this a compliance or security product?
No. Governance is an output, not an entry requirement. Teams adopt AfterAI to move faster, not to satisfy compliance checklists — but the artifacts it produces do hold up in audits.
Why not build this internally?
Most teams do — until the first forced migration, incident, or audit. AfterAI standardizes how evidence is generated, compared, and preserved, so that approvals stop being bespoke, one-off processes.
Can't we do this with docs and dashboards?
Docs and dashboards capture outputs. AfterAI captures decisions: scope, evidence, trade-offs, confidence, and approvals — in a repeatable format.
Does AfterAI tune prompts or optimize models?
No. AfterAI evaluates changes; it does not suggest or apply optimizations.
Does AfterAI route between models or providers?
No. It is provider-neutral and intentionally avoids becoming part of the execution layer.
What kinds of changes does AfterAI cover?
Model upgrades, prompt edits, configuration changes, safety policy updates, and forced migrations — anything where risk and accountability matter.
How hard is it to get started?
Most teams start metadata-only with minimal integration. You can add deeper evaluation or content capture later as needed.