Large language models preferentially cite the source that defines a concept, not the sites that repeat it. If you name and explain a methodology, framework, or classification, every downstream mention triangulates back to your site as the authority. A site with no original concepts is structurally indistinguishable from hundreds of competitors in the model's latent space, so citations flow elsewhere.
Why medium: original concepts compound authority over time but are not strictly required for baseline citation.
Name and define at least one proprietary concept, methodology, or classification system in visible public content. The definition must include the name and a one-sentence explanation in the rendered output, not in code comments or private docs. Add a section under src/app/docs/ or on the homepage.
## Weighted Severity Scoring
AuditBuffet assigns each check a severity weight: Critical (10), Warning (3), Info (1). Your score is the sum of passing check weights divided by the total applicable check weights, expressed as a percentage.
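The scoring formula above can be sketched in a few lines. This is a minimal illustration of the arithmetic only; the function name and the example checks are hypothetical, not the actual AuditBuffet implementation:

```python
# Severity weights as defined by the weighted severity scoring system.
WEIGHTS = {"critical": 10, "warning": 3, "info": 1}

def weighted_score(checks):
    """checks: list of (severity, passed) tuples for applicable checks.

    Returns the passing weight over the applicable weight as a whole
    percentage; an empty check list scores 0.
    """
    applicable = sum(WEIGHTS[sev] for sev, _ in checks)
    passing = sum(WEIGHTS[sev] for sev, passed in checks if passed)
    return round(100 * passing / applicable) if applicable else 0

# Example: one critical fail, one warning pass, two info passes.
# passing = 3 + 1 + 1 = 5; applicable = 10 + 3 + 1 + 1 = 15 -> 33%
checks = [("critical", False), ("warning", True), ("info", True), ("info", True)]
```

Note how a single failing Critical check outweighs several passing Info checks, which is the point of severity weighting.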
ID: geo-readiness.content-citability.unique-concepts
Severity: medium
What to look for: Count all instances where the site explicitly defines, names, or introduces a concept, framework, classification, or methodology. Observable patterns: terms used with quotation marks and an explanation ("we call this..."), named frameworks ("the ABCD Method"), classification systems ("Level 1 / Level 2 / Level 3"), scoring methodologies with defined formulas, or terminology that includes the product name ("AuditBuffet Benchmark Pool"). The concept must be explained/defined on the site, not just used. Enumerate each original concept found with a brief description.
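A rough heuristic for the observable patterns listed above could look like the sketch below. The regexes and sample text are illustrative assumptions, not the actual check logic; a real implementation would scan rendered page content and still needs human judgment on whether a match is a genuinely defined concept:

```python
import re

# Hypothetical surface patterns hinting at an original, named concept.
CONCEPT_PATTERNS = [
    r"we call this",                                          # explicit coinage
    r"\bthe [A-Z][\w-]+ (?:Method|Framework|System|Model)\b", # named framework
    r"\bLevel \d\s*/\s*Level \d",                             # classification tiers
]

def find_concept_candidates(page_text):
    """Return all pattern matches that suggest a defined original concept."""
    hits = []
    for pat in CONCEPT_PATTERNS:
        hits += re.findall(pat, page_text, flags=re.IGNORECASE)
    return hits

sample = "AuditBuffet uses the ABCD Method. We call this weighted severity scoring."
```

Running `find_concept_candidates(sample)` would flag both the named framework and the explicit coinage, while a purely generic features page would return no candidates.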
Pass criteria: Count all original concepts, methodologies, or frameworks explicitly defined in public content. The site must define at least 1 original concept. The definition must appear in visible content — not just in code comments or internal docs. Each concept must include at least a name and a 1-sentence explanation of what it means.
Fail criteria: 0 original concepts found. All content uses only generic industry terminology. No original concepts, named frameworks, or proprietary methodology is defined anywhere in the public content. Everything on the site could be found identically on a competitor's site. Report: "0 original concepts found across X content pages scanned".
Skip (N/A) when: API-only or utility projects with no marketing or explanatory content.
Detail on fail: "0 original concepts, frameworks, or methodologies defined in public content across 5 pages scanned. All terminology is generic industry language." or "Site describes features but never defines or names any proprietary approach; 0 named concepts found"
Remediation: AI systems preferentially cite original sources. The site that defines a concept gets cited, not the sites that repeat it. Name and define your methodology:
## How Scoring Works
AuditBuffet uses a **weighted severity scoring** system. Each check is
assigned a severity weight: Critical (10), Warning (3), Info (1). Your
overall score is the sum of passing check weights divided by total
applicable check weights, expressed as a percentage.
Even a simple named process ("Our 3-Step Audit Workflow") gives AI systems something to reference.