How to Run Your First AuditBuffet Audit
You've got a vibe-coded app. It works. Users can click things. But you have that nagging feeling that your AI coding tool skipped some important stuff. Here's how to find out in about 15 minutes.
Step 1: Pick a starting audit
Head to auditbuffet.com and browse the audit library. If you're not sure where to start, go with Security Headers & Basics — it's free, it's fast, and it almost always finds something.
Six audits are completely free, no account needed:
- Stack Scan — detects your tech stack
- SEO Fundamentals — meta tags, sitemaps, structured data
- Security Headers & Basics — CSP, HSTS, and 17 more checks
- Accessibility Fundamentals — ARIA, keyboard nav, heading structure
- Performance & Load Readiness — Core Web Vitals, bundle size
- Mobile Responsiveness — viewport, touch targets, responsive layouts
For your first run, pick one. Don't try to audit everything at once.
Step 2: Copy the audit prompt
Each audit page has a prompt you copy to your clipboard. That's it — no SDK, no npm package, no configuration file. The prompt contains everything: the checks to run, the output format, and the scoring rules.
Step 3: Run it in your AI coding tool
Paste the prompt into whatever you're already using — Claude Code, Cursor, Bolt, Windsurf, or any AI tool that can read your codebase. The audit needs access to your project files, so run it from the project root.
A few tips:
- Don't interrupt it. Let the audit run to completion. Partial results aren't useful.
- Save the raw output. The audit produces both a human-readable report and a JSON telemetry block. You'll want both.
- Context window matters. Larger projects may need a tool with a bigger context window. If the audit seems to skip files, that's usually why.
Step 4: Read the report
The audit output has two parts. The human-readable report is what you'll actually read — it lists every check, whether it passed or failed, and specific remediation steps for failures.
The interesting part is the scoring. Each check has a severity weight:
| Severity | Weight | Examples |
|----------|--------|----------|
| Critical | 10 | Missing HTTPS, no auth on admin routes, SQL injection |
| Warning | 3 | Missing CSP header, no error boundary, poor color contrast |
| Info | 1 | Missing meta description, no favicon, verbose console logs |
Your category score is the sum of passing check weights divided by total applicable check weights, times 100. Checks that were skipped or errored out don't count against you — they're excluded from both numerator and denominator.
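That rule is simple enough to sketch in a few lines of Python. This is an illustration of the formula as described, not AuditBuffet's actual implementation; the check names in the example are made up:

```python
# Severity weights from the table above.
SEVERITY_WEIGHTS = {"critical": 10, "warning": 3, "info": 1}

def category_score(checks):
    """checks: list of (severity, status), status is 'pass', 'fail', or 'skip'.

    Skipped/errored checks are excluded from numerator and denominator.
    """
    applicable = [(sev, st) for sev, st in checks if st in ("pass", "fail")]
    total = sum(SEVERITY_WEIGHTS[sev] for sev, _ in applicable)
    passed = sum(SEVERITY_WEIGHTS[sev] for sev, st in applicable if st == "pass")
    return round(100 * passed / total) if total else None

checks = [
    ("critical", "pass"),  # e.g. HTTPS enforced
    ("critical", "fail"),  # e.g. admin routes unauthenticated
    ("warning", "pass"),   # e.g. CSP header present
    ("info", "skip"),      # skipped check: doesn't count either way
]
print(category_score(checks))  # 13 passing weight / 23 applicable weight -> 57
```

Note how the skipped info check changes nothing: only the 23 points of applicable weight are in the denominator.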
The grade scale:
| Grade | Score Range |
|-------|-------------|
| A | 90–100 |
| B | 75–89 |
| C | 60–74 |
| D | 40–59 |
| F | 0–39 |
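As a quick sketch, the scale above is just a floor lookup (boundaries taken straight from the table):

```python
def grade(score):
    """Map a 0-100 category score to its letter grade."""
    for letter, floor in [("A", 90), ("B", 75), ("C", 60), ("D", 40)]:
        if score >= floor:
            return letter
    return "F"

print(grade(72))  # C
```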
Most AI-generated apps score between C and D on their first security audit. That's not a failing on your part — it's a failing of the tools. The point is to know where you stand.
Step 5: Fix or submit first — either works
You have two paths:
Fix first, then submit. Read the failures, fix the critical ones, re-run the audit, and submit your improved score. This gives you a cleaner baseline.
Submit first, then fix. Submit your raw score immediately, then fix issues and re-run. This gives you a visible improvement trajectory on your dashboard.
Either approach works. The benchmarking system tracks your most recent score per audit, so submitting a low initial score doesn't hurt you — it just shows progress when you improve.
Step 6: Submit for benchmarking
The JSON telemetry block at the end of your audit output is what you submit to AuditBuffet. Go to auditbuffet.com/submit, paste the JSON, and you're done.
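For orientation, a telemetry block looks something like the sketch below. The field names here are illustrative only, not AuditBuffet's actual schema; use whatever the audit prompt emits verbatim:

```json
{
  "audit": "security-headers-basics",
  "score": 57,
  "grade": "D",
  "checks": { "passed": 12, "failed": 7, "skipped": 2 },
  "stack": ["next.js", "vercel"]
}
```

Paste the block exactly as the audit produced it; hand-editing scores or check counts defeats the benchmarking.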
What you get back:
- Percentile ranking — how your app compares to others with similar tech stacks. If you scored 72 on Security Headers and that puts you at the 85th percentile, most apps in your segment are doing worse.
- Trend tracking — run the same audit monthly and watch your scores move. The dashboard shows your trajectory over time.
- Cross-audit coverage — once you've run multiple audits, the dashboard shows which categories you've covered and where your gaps are.
Anonymous submission is supported — no account required for a one-off check. Create a free account if you want persistent tracking.
What to run next
After your first audit, here's a good sequence:
- Security Headers & Basics → then Authentication & Session Security (both in the Security pack)
- SEO Fundamentals → gives you quick wins for discoverability
- Accessibility Fundamentals → then WCAG Compliance if you need full coverage
- Performance & Load Readiness → especially if you're seeing slow load times
Or grab a pack. The Pre-Launch pack bundles the audits most relevant to shipping, and the SaaS Essentials pack covers the full stack for subscription apps.
The whole point is that auditing should be a normal part of building with AI tools — not an afterthought. Run early, run often, and let the scores tell you what your AI missed.