A review form with no content filtering is an XSS injection surface (CWE-79, OWASP A03:2021 Injection): stored review text rendered without escaping can execute scripts in other users' browsers. Beyond injection risk, unfiltered review text lets competitors post identical five-star spam 100 times over — the only limit is how fast they can POST. A manual admin review queue without automated pre-filtering creates an unmanageable backlog at any real volume. At minimum, one automated layer (a profanity library or AI moderation API call) must run before the INSERT to catch the automated attacks.
High: absent content filtering allows stored XSS via review text and enables unlimited spam submissions, with no automated defense at any layer.
Add automated content filtering in api/reviews/submit using a profanity library, then route suspicious submissions through the manual queue at app/admin/reviews/page.tsx.
// api/reviews/submit
import Filter from 'bad-words'

const filter = new Filter()
const cleanText = filter.clean(reviewText)    // profane terms replaced with ***
const isSuspicious = cleanText !== reviewText // anything was filtered out

await db.reviews.create({
  data: {
    text: cleanText,
    status: isSuspicious ? 'pending' : 'approved',
    // ...
  },
})
// app/admin/reviews/page.tsx
export default async function ReviewQueue() {
  const pending = await db.reviews.findMany({
    where: { status: 'pending' },
    orderBy: { created_at: 'asc' }, // oldest first
  })
  return <ReviewModerationTable reviews={pending} />
}
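Where a word list is too coarse, the other accepted automated layer is an AI moderation call. A minimal sketch, assuming the official `openai` npm client and its moderations endpoint; `reviewStatus` is a hypothetical helper that maps the flagged verdict onto the same `status` column used above:

```typescript
// Map a moderation verdict onto the review status used above.
function reviewStatus(flagged: boolean): 'pending' | 'approved' {
  return flagged ? 'pending' : 'approved'
}

// In the submit handler (assumes the official `openai` package and an
// OPENAI_API_KEY in the environment -- shown as a sketch, not verified):
//   import OpenAI from 'openai'
//   const client = new OpenAI()
//   const res = await client.moderations.create({ input: reviewText })
//   const status = reviewStatus(res.results[0].flagged)
```

Either layer satisfies the one-mechanism minimum; the moderation API trades a network round-trip for far better coverage than a static word list.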
The admin route must be behind an auth check — never expose the moderation queue to unauthenticated users.
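A minimal sketch of that guard, assuming a role field on the session; the `Session` shape and the `getServerSession`/`redirect` names in the comment assume next-auth and next/navigation, so substitute your own auth library:

```typescript
// Hypothetical session shape -- adapt to your auth provider.
type Session = { user?: { role?: string } } | null

// Only an authenticated admin may see the moderation queue.
function isAdmin(session: Session): boolean {
  return session?.user?.role === 'admin'
}

// At the top of app/admin/reviews/page.tsx (sketch):
//   const session = await getServerSession()
//   if (!isAdmin(session)) redirect('/login')
```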
ID: ecommerce-reviews.moderation-trust.spam-detection
Severity: high
What to look for: Enumerate all content filtering mechanisms: (1) profanity filter library in package.json (e.g., bad-words, profanity-filter), (2) AI moderation API call (OpenAI moderation, Perspective API), (3) custom regex/keyword filtering, (4) admin review queue with approve/reject UI. Count the number of filtering layers present (minimum 1 required).
Pass criteria: At least 1 content filtering mechanism is implemented: an automated profanity filter library, an AI moderation API integration, or a manual admin review queue with approve/reject actions accessible at an admin route. The mechanism must execute before or during the review insert operation.
Fail criteria: No content filtering or moderation queue exists — reviews are stored and displayed without any spam or profanity check at any layer.
Skip (N/A) when: The project uses a third-party review service (Yotpo, Judge.me, Trustpilot) that handles moderation externally. Search package.json dependencies for these services.
Detail on fail: "0 content filtering mechanisms found. No profanity library in package.json, no moderation API calls in review submission, no admin review queue." or "Status column exists but no admin interface at any route to approve/reject pending reviews."
Remediation: Add automated filtering in the review submission handler at api/reviews/submit and/or create an admin queue at app/admin/reviews/page.tsx:
// Simple profanity filter
import Filter from 'bad-words'

const filter = new Filter()

// On review submission
const cleanText = filter.clean(reviewText)
const isSuspicious = cleanText !== reviewText // content was filtered

// Store with status based on filtering
const review = await db.reviews.create({
  data: {
    text: cleanText,
    status: isSuspicious ? 'pending' : 'approved',
  },
})
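A custom keyword/regex layer, mechanism (3) in the detection list, also satisfies the one-layer minimum. A sketch; the blocklist terms and the repeated-character heuristic are illustrative placeholders, not a vetted spam corpus:

```typescript
// Illustrative blocklist -- replace with terms relevant to your store.
const BLOCKLIST = ['free money', 'click here', 'viagra']

// Flag text containing a blocklisted phrase, or any character repeated
// ten or more times in a row (a cheap keyboard-mash/spam heuristic).
function isSuspiciousText(text: string): boolean {
  const lower = text.toLowerCase()
  return BLOCKLIST.some(term => lower.includes(term)) || /(.)\1{9,}/.test(lower)
}
```

The result feeds the same `status` decision as the profanity-library path: suspicious reviews land in the pending queue instead of going live.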
Or create an admin dashboard to manually review pending reviews:
// app/admin/reviews/page.tsx
export default async function ReviewQueue() {
  const pending = await db.reviews.findMany({
    where: { status: 'pending' },
  })
  return (
    <div>
      {pending.map(review => (
        <ReviewCard key={review.id} review={review} />
      ))}
    </div>
  )
}