The contract between a caller and an AI / model service — prompt structure, output validation, tool-call contracts, RAG correctness, and system-prompt protection.
The AI-consumption interface layer: is the contract between caller and model honored?
In scope. Prompt-injection hardening (untrusted input entering the prompt), system-prompt protection, output validation (schema conformance, hallucination guards, refusal detection), context-window management, RAG pipeline correctness (retrieval relevance, grounding, citation fidelity), tool / function-call contract correctness, model-version pinning, fallback behavior on provider outage, context-scoping across users and sessions, agent memory scoping.
Not in scope. Generic external-API consumption — error-resilience plus cost-efficiency. LLM cost runaway where bounding alone is the defect — cost-efficiency primary. Defects in application code authored by AI tools — those are named for what the defect IS (reference-integrity, dependency-coherence, placeholder-hygiene, code-quality), not by the authorship.
Distinct because. The defect is the contract between caller and non-deterministic model is violated. Prompt-injection carries both this and injection-and-input-trust (the model is the sink). Token-cap missing carries both this and cost-efficiency (the primary taxon shifts by which defect dominates).
Conceptual sub-structure. Prompt-injection surface, output validation, RAG correctness, tool-call contracts, context management, model-drift handling, agent memory scoping.
Note on the name. Formerly ai-integration — renamed because "AI integration" is a surface label, not a defect class, and the prefix "AI" ages into tautology as AI becomes pervasive. inference-contract names the specific defect surface: the contract between caller and model.