Why This Matters Now
AI-referred website sessions grew 527% year-over-year in 2025. Traditional search volume is projected to drop 25% by 2026 and 50% by 2028. The question isn’t whether AI answer engines will replace search — it’s whether your business is structured to be cited when they do.
The CiteLayer Index doesn’t measure SEO. It measures whether AI systems can find, understand, extract, compare, and recommend your business when someone asks a relevant question. These are different problems with different solutions.
How AI Selects Sources
Every major AI answer engine uses Retrieval-Augmented Generation (RAG) — a two-stage process where the system first retrieves relevant documents from an index, then synthesizes an answer citing the sources it relied on. The CiteLayer Index measures readiness at each stage of this pipeline.
Five Dimensions, Five Pipeline Stages
Findability
Maps to the RAG ingestion stage. If GPTBot, ClaudeBot, or PerplexityBot is blocked in robots.txt, or if your content requires JavaScript to render, nothing downstream matters. Most AI crawlers do not execute JavaScript, so content must be visible in the raw HTML.
We check: crawler permissions (robots.txt), redirect chains, sitemap presence, server-side rendering, and content accessibility without JavaScript execution.
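The crawler-permission check can be sketched with Python's standard-library robots.txt parser. This is a minimal illustration, not CiteLayer's actual implementation; the sample robots.txt and the list of user agents are assumptions for the example.

```python
from urllib import robotparser

# AI answer-engine crawlers named above; extend as needed.
AI_CRAWLERS = ["GPTBot", "ClaudeBot", "PerplexityBot"]

def crawler_permissions(robots_txt: str, path: str = "/") -> dict:
    """Return {user_agent: allowed} for a given robots.txt body and path."""
    rp = robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return {ua: rp.can_fetch(ua, path) for ua in AI_CRAWLERS}

# Example robots.txt that blocks GPTBot but allows everyone else.
robots = """User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""
print(crawler_permissions(robots))
# GPTBot is denied; ClaudeBot and PerplexityBot fall through to the * rule.
```

A real scan would also fetch the live robots.txt, follow redirect chains, and confirm the same content is present with JavaScript disabled.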
Describability
Maps to the knowledge graph construction stage. AI systems build internal entity representations from structured data — Organization schema, LocalBusiness schema, FAQPage markup, product/service attributes. Without this, AI guesses what you offer — or cites a competitor who made it explicit.
We check: JSON-LD schema markup, entity consistency, business attribute completeness (hours, location, services, reviews), and whether schema is server-side rendered.
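A schema-completeness check might look like the sketch below: pull JSON-LD blocks out of the served HTML and flag missing business attributes. The `REQUIRED_KEYS` list is an illustrative assumption, not CiteLayer's actual rubric.

```python
import json
from html.parser import HTMLParser

# Assumed minimum attribute set for a LocalBusiness entity.
REQUIRED_KEYS = ["name", "address", "telephone", "openingHours"]

class JSONLDExtractor(HTMLParser):
    """Collect every <script type="application/ld+json"> block as parsed JSON."""
    def __init__(self):
        super().__init__()
        self._in_jsonld = False
        self._buf = []
        self.blocks = []

    def handle_starttag(self, tag, attrs):
        if tag == "script" and ("type", "application/ld+json") in attrs:
            self._in_jsonld = True

    def handle_data(self, data):
        if self._in_jsonld:
            self._buf.append(data)

    def handle_endtag(self, tag):
        if tag == "script" and self._in_jsonld:
            self.blocks.append(json.loads("".join(self._buf)))
            self._buf, self._in_jsonld = [], False

def missing_attributes(html: str) -> list:
    """Return REQUIRED_KEYS absent from the page's LocalBusiness JSON-LD."""
    parser = JSONLDExtractor()
    parser.feed(html)
    for block in parser.blocks:
        if block.get("@type") == "LocalBusiness":
            return [k for k in REQUIRED_KEYS if k not in block]
    return REQUIRED_KEYS  # no LocalBusiness schema found at all

page = """<html><head><script type="application/ld+json">
{"@context": "https://schema.org", "@type": "LocalBusiness",
 "name": "Acme Plumbing", "telephone": "+1-555-0100"}
</script></head></html>"""
print(missing_attributes(page))
```

Because this parses the served HTML directly, it also catches the server-side-rendering problem: schema injected by client-side JavaScript would simply not appear.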
Summarizability
Maps to the RAG chunking and extraction stage. AI systems break pages into segments and evaluate each for relevance. Content structured as modular, self-contained sections (200-500 words each) with clear headings and direct answers in the first paragraph scores highest.
We check: first-paragraph answer density, heading structure (questions vs. vague labels), content modularity, FAQ presence, and overall extractability.
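Heading-based chunking, similar in spirit to how RAG pipelines segment a page, can be sketched in a few lines. The word counts echo the 200-500-word guideline above; the sample document is invented for illustration.

```python
import re

def chunk_by_headings(markdown: str) -> list:
    """Split markdown on heading lines; return (heading, word_count) per section."""
    # re.split with a capture group keeps the matched headings in the result,
    # so sections alternate: [preamble, heading1, body1, heading2, body2, ...]
    sections = re.split(r"^(#{1,6} .+)$", markdown, flags=re.M)
    chunks = []
    for heading, body in zip(sections[1::2], sections[2::2]):
        chunks.append((heading.lstrip("# ").strip(), len(body.split())))
    return chunks

# A question-style heading with a substantial body, then a thin section.
doc = "# What does a chimney sweep cost?\n" + ("word " * 250) + "\n## Vague label\nshort body"
for title, words in chunk_by_headings(doc):
    print(title, words)
```

A scoring pass would then flag sections far outside the 200-500-word band, headings that are vague labels rather than questions, and sections whose first paragraph doesn't answer the heading.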
Comparability
Maps to the retrieval and ranking stage. When a user asks “best X in Y,” AI must compare entities across consistent attributes. Businesses with structured, explicit differentiation data — pricing, specialties, service areas, unique value — win the comparison. Those without it lose by default.
We check: brand entity consistency across platforms, sameAs profile links, category identification, contact detail consistency, and cross-platform presence.
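Entity consistency across platforms reduces to a normalization-and-compare problem. The sketch below checks name and phone agreement across listings; the field names and sample data are assumptions for illustration.

```python
import re

def normalize_phone(phone: str) -> str:
    """Strip formatting and country code; keep the last 10 digits."""
    return re.sub(r"\D", "", phone)[-10:]

def consistency_report(listings: list) -> dict:
    """Flag whether name and phone agree across all platform listings."""
    names = {listing["name"].strip().lower() for listing in listings}
    phones = {normalize_phone(listing["phone"]) for listing in listings}
    return {"name_consistent": len(names) == 1,
            "phone_consistent": len(phones) == 1}

listings = [
    {"platform": "website", "name": "Acme Plumbing",     "phone": "+1 (555) 010-0123"},
    {"platform": "maps",    "name": "Acme Plumbing",     "phone": "555-010-0123"},
    {"platform": "yelp",    "name": "Acme Plumbing LLC", "phone": "5550100123"},
]
print(consistency_report(listings))
# Phones normalize to the same digits; the "LLC" suffix breaks name consistency.
```

The same pattern extends to addresses, categories, and sameAs profile URLs: normalize each attribute, then treat any set with more than one distinct value as an inconsistency for AI systems to trip over.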
Recommendability
Maps to the citation and attribution stage. AI systems need verifiable trust signals — third-party mentions, review data, content depth, freshness signals — to justify recommending a specific entity. Without these, AI defaults to safer, more documented alternatives.
We check: third-party credibility signals, review data accessibility, content depth and freshness, competitive positioning signals, and citation history across AI platforms.
Scoring Methodology
Each dimension is scored 0-10 based on automated checks that evaluate structural signals. The five dimension scores sum to the CiteLayer AI Score (0-50). Letter grades translate the composite score into an at-a-glance assessment.
| Grade | Score Range | What It Means |
|---|---|---|
| A+ / A / A- | 42-50 | AI systems can reliably find, describe, extract, compare, and recommend your business. |
| B+ / B / B- | 33-41 | Most structural signals are in place. Targeted improvements will close remaining gaps. |
| C+ / C / C- | 24-32 | Partial visibility. AI can find you but lacks enough data to consistently recommend you. |
| D+ / D / D- | 15-23 | Significant structural gaps. AI defaults to competitors with better-structured data. |
| F | 0-14 | Structurally invisible to AI answer engines regardless of traditional SEO performance. |
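The composite scoring described above is a straightforward sum-and-band mapping. This sketch uses the published score ranges; how the +/- modifiers split within each band is not specified here, so the code returns only the base letter.

```python
# Lower bound of each grade band from the table above, checked in order.
GRADE_BANDS = [(42, "A"), (33, "B"), (24, "C"), (15, "D"), (0, "F")]

def citelayer_grade(*dimension_scores):
    """Sum five 0-10 dimension scores into a 0-50 total and map it to a band."""
    assert len(dimension_scores) == 5
    assert all(0 <= s <= 10 for s in dimension_scores)
    total = sum(dimension_scores)
    for cutoff, letter in GRADE_BANDS:
        if total >= cutoff:
            return total, letter

# Findability, Describability, Summarizability, Comparability, Recommendability
print(citelayer_grade(9, 7, 6, 8, 5))
```

Because the composite is a plain sum, a single failing dimension (say, Findability at 0 when crawlers are blocked) caps the maximum achievable grade, which matches the "nothing downstream matters" framing above.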
What We Don’t Measure (And Why)
The CiteLayer Index deliberately excludes traditional SEO metrics like keyword rankings, backlink profiles, and domain authority. These are valuable for Google Search — but AI answer engines use a fundamentally different selection process. A page can rank #1 on Google and be completely invisible to ChatGPT if it lacks structured data, isn’t crawlable by AI bots, or can’t be cleanly extracted into a citable passage.
We also don’t claim to predict exact AI responses. Rand Fishkin’s research found that fewer than 1 in 100 identical prompts produced the same brand list across AI systems. What we measure is structural readiness — whether your content has the signals that make citation possible and probable, not guaranteed.
Research Foundation
The CiteLayer Index measures structural signals correlated with AI citation in published research. It does not guarantee citation by any specific AI platform. AI system responses vary by query, region, index state, and model version. The methodology is updated as new research emerges. Signal assessment is point-in-time as of scan date.
CiteLayer AI does not claim authorship of the underlying research. We operationalize published findings into a diagnostic framework. All research sources are cited and linked above.