Flat storage namespaces are a common AI-generated blind spot: code stores files as uploads/{uuid} with no tenant prefix, so any authenticated user who discovers a file key — via URL scraping, shared links, or support screenshots — can download another tenant's files. CWE-732 (Incorrect Permission Assignment for Critical Resource) and OWASP A01 both apply. Beyond privacy, tenant-to-tenant file exposure violates SOC 2 CC6.1 data classification requirements and, when files contain PII, triggers GDPR Art. 33 breach notifications.
High because a flat storage namespace lets any authenticated user retrieve another tenant's files by constructing or guessing the storage key, with no further privilege required.
Prefix every storage key with the tenant identifier at upload time, and validate ownership before generating a signed retrieval URL:
// S3 key generation — always include tenant prefix
const key = `tenants/${session.user.organizationId}/uploads/${crypto.randomUUID()}-${fileName}` // fileName: stand-in for the uploaded file's original name
// Retrieval — verify ownership before signing
const fileRecord = await db.files.findFirst({
where: { id: fileId, organizationId: session.user.organizationId }
})
if (!fileRecord) return new Response('Not found', { status: 404 })
const signedUrl = await s3.getSignedUrlPromise('getObject', { Bucket, Key: fileRecord.s3Key, Expires: 300 }) // AWS SDK v2 promise variant; URL expires in 5 minutes
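As defense in depth, the retrieval handler can also assert that the stored key actually carries the requester's tenant prefix before signing. A minimal sketch, assuming the tenants/{orgId}/uploads/ key scheme shown above (the helper name is ours, not from any library):

```typescript
// Returns true only when the S3 key sits under the requester's tenant prefix.
// Catches records whose s3Key predates tenant scoping, or keys written by
// a code path that skipped the prefix convention.
function keyBelongsToTenant(s3Key: string, organizationId: string): boolean {
  // Trailing slash prevents "org_a" from matching "org_abc".
  return s3Key.startsWith(`tenants/${organizationId}/`);
}

// Example: a key written for Tenant A is rejected for Tenant B.
const exampleKey = "tenants/org_a/uploads/123e4567-report.pdf";
console.log(keyBelongsToTenant(exampleKey, "org_a")); // true
console.log(keyBelongsToTenant(exampleKey, "org_b")); // false
```

Rejecting mismatches with a 404 (not a 403) avoids confirming to the caller that the file exists.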
For Supabase Storage, write a bucket RLS policy that checks auth.jwt() ->> 'org_id' = (storage.foldername(name))[1] so the storage engine itself enforces isolation without relying on application code. (Postgres requires the parentheses to subscript the array returned by the function.)
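Such a policy can be sketched as follows — the bucket name tenant-files and the org_id JWT claim are assumptions about your setup, and the object paths are assumed to start with the tenant identifier as their first folder:

```sql
-- Allow reads only when the first folder of the object path matches the
-- caller's org_id claim, e.g. objects named {org_id}/uploads/{uuid}-file.pdf
create policy "tenant_isolated_reads"
on storage.objects for select
using (
  bucket_id = 'tenant-files'
  and auth.jwt() ->> 'org_id' = (storage.foldername(name))[1]
);
```

A matching policy for insert (with a with check clause on the same expression) keeps writes tenant-scoped as well.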
ID: saas-multi-tenancy.data-isolation.file-storage-segregated
Severity: high
What to look for: Examine all file upload and storage integration code. Look for S3 key generation, Supabase Storage bucket policies, GCS object naming, Cloudinary folder configuration, and UploadThing file routing. Check whether uploaded files are stored under a path that includes the tenant identifier (e.g., uploads/{tenantId}/{fileId} or org-{orgId}/documents/{fileId}). Check whether storage bucket policies or RLS rules restrict access to tenant-owned files.
Pass criteria: Enumerate all file upload handlers and confirm every one stores files under a path or prefix that includes the tenant identifier. File retrieval endpoints validate tenant ownership before returning signed URLs or file contents. If using Supabase Storage, bucket RLS policies restrict reads and writes to the file owner's tenant.
Fail criteria: Files are stored in a flat namespace without tenant-scoped paths (e.g., uploads/{fileId} with no tenant prefix). File retrieval does not validate that the requesting tenant owns the file. A user from Tenant A can construct a URL to retrieve a file uploaded by Tenant B.
Skip (N/A) when: No file storage integration is detected. Signal: no S3/GCS/R2 SDK, no Supabase Storage usage, no Cloudinary/UploadThing/uploadcare, no file upload endpoint patterns in the codebase.
Detail on fail: Describe the storage pattern and what's missing. Example: "S3 keys generated as uploads/{uuid} with no tenant prefix in src/lib/upload.ts. No ownership check found on GET /api/files/:fileId endpoint."
Remediation: Structure storage keys so the tenant is always part of the path:
// S3 key generation
const key = `tenants/${session.user.organizationId}/uploads/${crypto.randomUUID()}-${fileName}` // fileName: stand-in for the uploaded file's original name
// For retrieval — verify ownership before generating signed URL
const fileRecord = await db.files.findFirst({
where: { id: fileId, organizationId: session.user.organizationId }
})
if (!fileRecord) return new Response('Not found', { status: 404 })
const signedUrl = await s3.getSignedUrlPromise('getObject', { Bucket, Key: fileRecord.s3Key, Expires: 300 }) // AWS SDK v2 promise variant; URL expires in 5 minutes
For Supabase Storage, write RLS policies on the bucket that restrict access with auth.jwt() ->> 'org_id' = (storage.foldername(name))[1] or an equivalent tenant path check.