How indexers, crawlers, and generative engines extract and surface content — SEO, GEO, structured data, sitemaps, canonicals, crawlability.
The external-discovery layer: what do indexers, crawlers, and generative engines extract from this page?
In scope. SEO (title tags, meta descriptions, canonical URLs, hreflang, internal linking), GEO / generative-engine optimization (content structured for AI crawlers and chat-driven discovery), schema.org / JSON-LD structural validity, sitemap and robots.txt correctness, crawl directives, metadata correctness (OpenGraph, Twitter Card), XML feeds, AI-crawler access policy.
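To make the structured-data concern concrete, here is a minimal schema.org Product block in JSON-LD, the kind of machine-readable surface this category cares about. This is an illustrative sketch: the URL, product name, and price are invented, and real pages would carry whatever types and properties match their content.

```json
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Widget",
  "description": "Short machine-readable summary of the product.",
  "url": "https://example.com/widgets/example-widget",
  "offers": {
    "@type": "Offer",
    "price": "19.99",
    "priceCurrency": "USD",
    "availability": "https://schema.org/InStock"
  }
}
```

A findability defect here would be structural (wrong `@type`, malformed JSON, missing required properties), not whether the description's claims are true; that latter concern is content-integrity.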
Not in scope. Human-readable copy quality and accuracy — that's content-integrity. Page-load performance that affects ranking — that's performance (even though performance influences SEO, the root defect is latency). In-product information architecture (the UX "can users find features" sense) — that's user-experience.
Distinct because. Concerns what machines extract from content, not what humans read or how the page behaves. A pattern about "canonical URL points to redirect chain" is findability. A pattern about "product description contains false claim" is content-integrity.
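The "canonical URL points to a redirect chain" example can be sketched as a check. This is a hedged illustration, not a real crawler: it operates on a pre-fetched map from URL to redirect target (`None` meaning the URL serves content directly), and all URLs in it are hypothetical.

```python
def canonical_resolves_directly(canonical_url, redirect_map):
    """Return True if the canonical URL serves content without redirecting.

    redirect_map maps each URL to its redirect target, or None if the
    URL responds directly. A canonical that enters a redirect chain is
    a findability defect: crawlers may consolidate ranking signals onto
    the wrong URL or discard the canonical hint entirely.
    """
    return redirect_map.get(canonical_url) is None


def redirect_chain(url, redirect_map, max_hops=10):
    """Follow redirects in the map and return the full chain of URLs."""
    chain = [url]
    seen = {url}
    while redirect_map.get(chain[-1]) is not None and len(chain) <= max_hops:
        nxt = redirect_map[chain[-1]]
        if nxt in seen:  # redirect loop: stop rather than spin forever
            break
        seen.add(nxt)
        chain.append(nxt)
    return chain


# Hypothetical site snapshot: /old redirects to /mid, /mid to /final.
redirects = {
    "https://example.com/old": "https://example.com/mid",
    "https://example.com/mid": "https://example.com/final",
    "https://example.com/final": None,
}

print(canonical_resolves_directly("https://example.com/old", redirects))  # False
print(redirect_chain("https://example.com/old", redirects))
```

A page whose `rel="canonical"` points at `/old` in this snapshot exhibits exactly the defect the definition names: the canonical target only resolves after two hops.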
Conceptual sub-structure. Crawler access, structured data, metadata, sitemaps / feeds, canonical / redirect hygiene, AI-crawler / GEO surface.
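The crawler-access and AI-crawler / GEO sub-areas meet in robots.txt. The fragment below is an illustrative policy, not a recommendation: the user-agent tokens (GPTBot, Google-Extended) are ones published by their operators, but the allow/disallow choices and paths are invented for the example.

```text
# Illustrative robots.txt: explicit policy for both traditional
# and AI crawlers, rather than leaving access ambiguous.
User-agent: *
Allow: /

# AI crawlers, addressed by their published user-agent tokens
User-agent: GPTBot
Allow: /

User-agent: Google-Extended
Disallow: /drafts/

Sitemap: https://example.com/sitemap.xml
```

A findability pattern in this area would flag, say, a sitemap URL that 404s, or an AI-crawler policy that is absent where the site intends one.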
Note on the name. Formerly discoverability — renamed because "discoverability" in UX literature means "can users find features in an interface," which collides with the intended SEO / indexing scope. findability is the Nielsen Norman term for "can a system be located externally," and it ages cleanly as AI-chat discovery joins crawler-based search.