GDPR Article 32 requires that personal data be transmitted with appropriate technical security measures — which in practice means TLS encryption. CWE-319 (Cleartext Transmission of Sensitive Information) and OWASP A02 (Cryptographic Failures) both apply when AI prompt data travels over unencrypted HTTP. AI prompts contain user input in its rawest form: questions, documents, instructions, and frequently PII the user embedded without realizing it. Any network observer on the path between your server and the AI provider endpoint — a misconfigured proxy, a shared hosting environment, a compromised network hop — can read every prompt in plaintext. NIST SP 800-53 Rev. 5 SC-8 mandates transmission confidentiality as a baseline control.
Critical because HTTP-transmitted AI prompts expose user content to any network observer between server and provider, constituting cleartext PII transmission in violation of GDPR Art. 32, CWE-319, and OWASP A02.
Ensure any custom baseURL configured in your AI client starts with https://. Official SDK default endpoints are already HTTPS — only custom or self-hosted endpoints need verification.
// Correct:
const openai = new OpenAI({
  baseURL: 'https://your-proxy.example.com/v1',
  apiKey: process.env.OPENAI_API_KEY
})

// Wrong — never in production:
const openai = new OpenAI({
  baseURL: 'http://your-proxy.example.com/v1',
})
For local development mocks (Ollama, LM Studio), guard the HTTP URL behind a NODE_ENV check so it cannot accidentally reach production:
const openai = new OpenAI({
  baseURL: process.env.NODE_ENV === 'development'
    ? 'http://localhost:11434/v1'
    : 'https://api.openai.com/v1'
})
Verify by searching the codebase for http:// (not https://) in any string adjacent to AI client configuration.
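That search can be mechanized. The sketch below (the function name findInsecureEndpoints is illustrative) extracts http:// URLs from file contents and filters out localhost loopback addresses, leaving only candidates for manual review:

```javascript
// Illustrative scanner: given a file's source text, return any http://
// URLs that are not loopback addresses — each one warrants review.
function findInsecureEndpoints(source) {
  const urls = source.match(/http:\/\/[^\s'"`)]+/g) || []
  return urls.filter((url) => !/^http:\/\/(localhost|127\.0\.0\.1)([:/]|$)/.test(url))
}
```

Run it over the output of a file walk (or pipe `git grep -n "http://"` through an equivalent filter) to get a reviewable shortlist instead of raw grep noise.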
ID: ai-data-privacy.third-party-ai-provider.api-transport-encrypted
Severity: critical
What to look for: Enumerate every place the AI client is configured. Look at where the AI client is initialized and whether any custom baseURL is set. Search for baseURL, baseUrl, apiBase, or similar configuration options in the AI client constructor or configuration object. If a custom endpoint URL is provided (for custom deployments, proxies, or self-hosted models), check whether it starts with https://. Also check .env.example and other config files for endpoint URL constants. The default endpoints of official SDKs (OpenAI, Anthropic, Google AI) use HTTPS — only flag if a custom endpoint overrides this.
Pass criteria: At least one of the following conditions is met: no custom base URL is configured (relying on the SDK's default HTTPS endpoints), or every custom baseURL found starts with https://. Local development URLs (http://localhost) are acceptable as development-only overrides.
Fail criteria: A custom baseURL or endpoint is configured with http:// (non-TLS) pointing to a non-localhost address — indicating production traffic to the AI provider is unencrypted.
Skip (N/A) when: No AI provider API calls are found in the codebase.
Detail on fail: "Custom AI provider endpoint configured with http:// in [file] — API calls including user prompt data are sent unencrypted over the network"
Remediation: Any data sent over HTTP is visible to anyone on the network path between your server and the AI provider. Since AI prompts regularly contain user input, this exposes user data directly.
If you are using a proxy or custom deployment:
// Ensure HTTPS:
const openai = new OpenAI({
  baseURL: 'https://your-proxy.example.com/v1', // not http://
  apiKey: process.env.OPENAI_API_KEY
})
For local development mocks, guard the HTTP URL behind a NODE_ENV check:
const openai = new OpenAI({
  baseURL: process.env.NODE_ENV === 'development'
    ? 'http://localhost:11434/v1' // local Ollama mock
    : 'https://api.openai.com/v1'
})
For a broader review of transport security configuration including HSTS and TLS settings, the Security Headers & Basics Audit covers this in depth.