Platforms that accept image uploads without scanning expose their users to CSAM, non-consensual intimate imagery, graphic violence, and extremist content, categories that carry serious legal consequences for the platform. The underlying weakness maps to CWE-434 (Unrestricted Upload of File with Dangerous Type) and OWASP A04:2021 (Insecure Design). Beyond legal risk, explicit imagery appearing in community feeds causes immediate user harm and reputation damage that is difficult to recover from. Manual review alone cannot scale: a single bad actor can upload hundreds of images in minutes, overwhelming any human queue.
Severity is low because image scanning adds meaningful safety coverage, but platforms without it may still have manual review processes that partially compensate, and exploitation requires deliberate attacker intent.
Add an automated image scanning step to your upload handler in src/app/api/uploads/route.ts before the image is written to storage or made publicly visible:
import { ImageAnnotatorClient } from '@google-cloud/vision';

const vision = new ImageAnnotatorClient();

export async function scanUpload(buffer: Buffer): Promise<void> {
  const [result] = await vision.safeSearchDetection({ image: { content: buffer } });
  const annotation = result.safeSearchAnnotation;
  if (!annotation) {
    // Fail closed: treat a missing annotation as a scan failure, not a pass.
    throw new Error('SafeSearch scan returned no annotation');
  }
  const flagged = ['LIKELY', 'VERY_LIKELY'];
  if (flagged.includes(String(annotation.adult)) || flagged.includes(String(annotation.violence))) {
    throw new Error('Image flagged for policy violation');
  }
}
Alternatives: AWS Rekognition DetectModerationLabels or the Clarifai moderation model. Give images a pending_review status and hide them from public view until the scan resolves.
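The pending_review flow above can be sketched as a pure decision function. Everything here is illustrative: the decideStatus name, the three-state status, and the exact likelihood thresholds are assumptions to tune to your policy, not part of any moderation API.

```typescript
// Likelihood values as returned by Google SafeSearch. The mapping below to
// approved / pending_review / rejected is an assumed policy, not an API contract.
type Likelihood =
  | 'UNKNOWN' | 'VERY_UNLIKELY' | 'UNLIKELY' | 'POSSIBLE' | 'LIKELY' | 'VERY_LIKELY';
type ModerationStatus = 'approved' | 'pending_review' | 'rejected';

export function decideStatus(scores: { adult: Likelihood; violence: Likelihood }): ModerationStatus {
  const likelihoods = [scores.adult, scores.violence];
  if (likelihoods.some((l) => l === 'VERY_LIKELY')) return 'rejected';        // block outright
  if (likelihoods.some((l) => l === 'LIKELY' || l === 'POSSIBLE')) return 'pending_review'; // hide until human review
  return 'approved';                                                          // safe to show publicly
}
```

Keeping the decision pure makes the threshold policy unit-testable without calling the Vision API at all.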
ID: community-moderation-safety.content-filtering.image-scanning
Severity: low
What to look for: If the platform allows image/media uploads, check whether images are scanned for explicit content, nudity, or violence. Look for integration with image moderation APIs (Google Vision, AWS Rekognition, Clarifai, etc.) or server-side moderation libraries; client-side-only checks can be bypassed and do not satisfy this check. Verify scanning happens before, or immediately after, upload and before public visibility.
Pass criteria: Images and media go through automated scanning for at least one category of explicit/prohibited content before being made publicly visible. Count all image upload endpoints and verify each triggers a scanning step. Flagged images are either rejected, quarantined for review, or hidden by default.
Fail criteria: No image scanning is implemented. Users can upload and display explicit or prohibited media without review.
Skip (N/A) when: Platform does not allow user image uploads.
Detail on fail: "Image uploads are stored directly without any scanning for explicit content."
Remediation: Use Google Cloud Vision or AWS Rekognition to scan images for explicit content before they are made publicly visible. Add scanning middleware to your upload route in src/app/api/uploads/route.ts:
import { ImageAnnotatorClient } from '@google-cloud/vision';

const visionClient = new ImageAnnotatorClient();

async function scanImage(buffer: Buffer): Promise<void> {
  const [result] = await visionClient.safeSearchDetection({ image: { content: buffer } });
  const safe = result.safeSearchAnnotation;
  if (!safe) {
    // Fail closed if the scan returns no annotation.
    throw new Error('SafeSearch scan returned no annotation');
  }
  const flagged = ['LIKELY', 'VERY_LIKELY'];
  if (flagged.includes(String(safe.adult)) || flagged.includes(String(safe.violence))) {
    throw new Error('Image flagged for explicit content');
  }
}
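A sketch of how a scan step like scanImage might be wired into the upload route. The makeUploadHandler factory, the injected store callback, and the status codes are assumptions, not an existing API; the scanner is passed in as a parameter so the reject-before-store policy can be exercised with a stub instead of real Vision credentials.

```typescript
// Hypothetical wiring sketch: scanner and storage are injected so the
// "scan before anything is stored or visible" policy stays testable.
type Scan = (buffer: Buffer) => Promise<void>;    // throws on policy violation
type Store = (buffer: Buffer) => Promise<string>; // returns the stored image URL

export function makeUploadHandler(scan: Scan, store: Store) {
  return async (request: Request): Promise<Response> => {
    const form = await request.formData();
    const file = form.get('image');
    if (!(file instanceof Blob)) {
      return new Response('Missing image', { status: 400 });
    }
    const buffer = Buffer.from(await file.arrayBuffer());
    try {
      await scan(buffer); // runs before anything is written to storage
    } catch {
      return new Response('Image rejected by content policy', { status: 422 });
    }
    const url = await store(buffer);
    return Response.json({ url }, { status: 201 });
  };
}
```

In the real route, scan would be scanImage and store would write to blob storage; in tests both can be stubbed, which also keeps the handler independent of any one moderation vendor.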