If your content requires JavaScript to appear in the browser, most AI systems cannot see it. ChatGPT, Claude, and Perplexity fetch raw HTML without executing scripts, leaving client-rendered pages invisible. This article covers how the rendering gap between search engines and AI crawlers works, how to diagnose it, and how to fix it.
The rendering gap between search and AI
Google invested over a decade building infrastructure to render JavaScript (see Google's rendering pipeline). Googlebot's Web Rendering Service executes modern JavaScript frameworks, processes API responses, and indexes content that only exists after client-side hydration (where JavaScript activates interactive elements on a server-delivered page).
AI crawlers have not made this investment. ChatGPT, Claude, Perplexity, and most other AI systems fetch raw HTML and stop there—no JavaScript execution occurs.
This creates a fundamental visibility problem: a site can rank well in Google Search while being completely invisible to the AI systems increasingly used to answer questions directly.
What AI crawlers actually see
When an AI crawler requests a page, it receives the initial HTML response: the same HTML you see when you "View Page Source" in a browser. Unlike Googlebot, these crawlers do not:
- Execute JavaScript files
- Wait for API responses
- Process client-side rendering
- Trigger React, Vue, or Angular hydration
A client-side rendered React application serves HTML like this:
```html
<!DOCTYPE html>
<html>
  <head>
    <title>My App</title>
  </head>
  <body>
    <div id="root"></div>
    <script src="/bundle.js"></script>
  </body>
</html>
```
To a browser, this becomes a fully rendered page after JavaScript execution. To an AI crawler, this is the entire content: an empty div and a script reference. The product descriptions, articles, and features your users see simply don't exist in the AI's view of the web.
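You can approximate what a non-rendering crawler recovers by stripping scripts and tags from the raw HTML string. A rough heuristic sketch (regex-based stripping is a simplification, not a real HTML parser):

```javascript
// Rough sketch: measure the text an AI crawler could recover from raw HTML.
// Regex stripping is a simplification, not a real HTML parser.
function visibleTextLength(rawHtml) {
  const withoutScripts = rawHtml.replace(/<(script|style)[\s\S]*?<\/\1>/gi, "");
  const text = withoutScripts
    .replace(/<[^>]+>/g, " ") // drop all remaining tags
    .replace(/\s+/g, " ")
    .trim();
  return text.length;
}

// The client-rendered shell from the example above
const shell =
  '<!DOCTYPE html><html><head><title>My App</title></head>' +
  '<body><div id="root"></div><script src="/bundle.js"></script></body></html>';

console.log(visibleTextLength(shell)); // → 6: only "My App" from the <title> survives
```

A server-rendered version of the same page would score in the thousands; a near-zero result is the signature of a client-rendered shell.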
Content beyond the initial HTML
Beyond full client-side rendering, other patterns create AI visibility gaps even when content technically exists in the page.
Tabs and accordions
Content in collapsed accordion panels or inactive tabs may be present in the initial HTML but styled as hidden. While Googlebot can access this content (it's in the DOM, just not displayed), the semantic relationship between the content and its context is lost on AI systems parsing raw HTML.
More problematic are implementations where tab content loads via JavaScript only when clicked. These patterns leave AI crawlers with incomplete content regardless of rendering capability.
Infinite scroll and lazy loading
AI crawlers don't scroll. Content that loads on scroll events remains unfetched. Standard lazy loading for images using native browser features works fine: the image URLs are in the HTML. But content that requires scroll position triggers or Intersection Observer callbacks (a browser API that detects when elements enter the viewport) to load will be missing.
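The distinction shows up directly in the markup. Native lazy loading keeps the image URL in the raw HTML; observer-driven loading (sketched here with a hypothetical data-src convention) does not:

```html
<!-- Visible to AI crawlers: the URL is in the raw HTML -->
<img src="/product.jpg" loading="lazy" alt="Product photo">

<!-- Invisible: JavaScript must copy data-src into src after an
     Intersection Observer callback fires -->
<img data-src="/product.jpg" class="js-lazy" alt="Product photo">
```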
Dynamic filters and personalisation
Product listings that depend on JavaScript to filter, sort, or display results based on user preferences present empty or default states to AI crawlers. If your product pages show different content based on user selections without updating the URL, the dynamic content is invisible.
Crawler rendering capabilities
Not all crawlers handle JavaScript the same way. Understanding which systems render content helps prioritise where to focus remediation.
| Crawler | Operator | JavaScript rendering |
|---|---|---|
| Googlebot | Google | Full rendering via WRS |
| GPTBot | OpenAI | No rendering |
| ChatGPT-User | OpenAI | No rendering |
| OAI-SearchBot | OpenAI | No rendering |
| ClaudeBot | Anthropic | No rendering |
| PerplexityBot | Perplexity | No rendering |
| Meta-ExternalAgent | Meta | No rendering |
| Bytespider | ByteDance | No rendering |
| CCBot | Common Crawl | No rendering |
| AppleBot | Apple | Full rendering (browser-based) |
| Google-Extended | Google | Full rendering (uses Googlebot infrastructure) |
Google's AI training crawler (Google-Extended) and Apple's crawler both render JavaScript because they use existing browser-based infrastructure. The newer AI companies (OpenAI, Anthropic, Perplexity) have built crawlers optimised for speed and scale, not rendering fidelity.
This means content accessible to Google Search and AI Overviews may still be invisible to ChatGPT, Claude, and Perplexity. These are separate visibility problems requiring distinct solutions.
Symptoms of invisible content
Several observable patterns indicate your content isn't reaching AI systems:
Missing or generic citations
When AI systems can't read your content, they can't cite it meaningfully. If your pages appear in "reviewed" or "more sources" sections rather than primary citations, the system may have crawled the URL but found nothing to extract.
Favicon failures
AI systems display favicons by parsing HTML for icon references. Client-side rendered pages often inject favicons via JavaScript. When the HTML contains no favicon link, AI interfaces fall back to generic placeholder icons: a visible signal that the page wasn't fully processed.
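The remedy is to declare the icon in the server-delivered markup rather than injecting it client-side:

```html
<!-- Present in raw HTML, so crawlers and AI interfaces can find it -->
<link rel="icon" href="/favicon.ico">
```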
Content retrieval failures
Testing directly reveals the problem. When asked to summarise a specific URL, AI systems often report they cannot find content, explicitly stating the page loads dynamically or that no readable content was returned. ChatGPT in particular will sometimes identify JavaScript rendering as the cause of the failure.
This confirms the rendering limitation: the system found the URL, fetched the HTML, and found nothing usable.
Thin or irrelevant answers about your content
When AI systems answer questions about your product or content, they may provide generic responses or information from secondary sources rather than your authoritative pages. If competitors with server-rendered content get cited while your client-rendered pages don't, rendering is likely the cause.
How to test your site
Method 1: Disable JavaScript
The simplest test uses your browser's developer tools:
- Open Chrome DevTools (F12 or right-click → Inspect)
- Press Cmd/Ctrl+Shift+P to open the command palette
- Type "Disable JavaScript" and select the option
- Reload the page
What remains visible is what AI crawlers see. If your page is blank or missing critical content, you have a rendering problem.
Method 2: View source vs. rendered DOM
Compare what's in the HTML source versus what appears after JavaScript execution:
- View Page Source (Cmd/Ctrl+U): Shows raw HTML the server delivers
- Inspect Element: Shows the DOM (the live document structure the browser builds after processing HTML and JavaScript) after JavaScript has modified it
If critical content only appears in the rendered DOM, it's invisible to AI crawlers.
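This comparison can be automated: diff the words recoverable from the raw source against the rendered markup copied from DevTools. A minimal sketch, again using naive tag-stripping rather than a real parser:

```javascript
// Sketch: which words exist only after JavaScript rendering?
// rawHtml = "View Page Source"; renderedHtml = DevTools "Copy outerHTML".
function extractWords(html) {
  const text = html
    .replace(/<(script|style)[\s\S]*?<\/\1>/gi, "")
    .replace(/<[^>]+>/g, " ")
    .toLowerCase();
  return text.split(/\s+/).filter(Boolean);
}

function wordsMissingFromRaw(rawHtml, renderedHtml) {
  const rawWords = new Set(extractWords(rawHtml));
  return extractWords(renderedHtml).filter((w) => !rawWords.has(w));
}

const raw = '<div id="root"></div>';
const rendered = '<div id="root"><h1>Product specifications</h1></div>';
console.log(wordsMissingFromRaw(raw, rendered)); // → [ 'product', 'specifications' ]
```

A long list of missing words on a key template is a strong signal that the content only exists in the rendered DOM.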
Method 3: Ask the AI directly
Query ChatGPT, Claude, or Perplexity about specific pages:
"Read the content at [URL] and summarise the first two paragraphs."
If the system returns an error or generic response rather than your actual content, rendering is blocking access.
Method 4: Compare against a control
Test a page you know uses server-side rendering (such as a blog post or documentation page from a major platform) alongside your own pages. The contrast in response quality reveals whether the issue is rendering-specific or a broader access problem.
Why Google works but AI doesn't
Site owners sometimes assume that because Google indexes their JavaScript-rendered content, other crawlers will too. This assumption is wrong for several reasons:
Infrastructure investment
Google has spent years building and scaling a rendering service. The Web Rendering Service runs headless Chromium instances across massive infrastructure. AI companies have prioritised model training and inference infrastructure, not web rendering.
Economic incentives
For Google, rendering JavaScript is essential to index the modern web accurately. For AI crawlers, the cost-benefit calculation differs: many useful training signals exist in raw HTML, and rendering every page dramatically increases crawl costs.
Speed vs. fidelity trade-off
AI crawlers prioritise speed and volume. Vercel's data shows ChatGPT and Claude operate from only 1-2 U.S. data centre locations, compared to Googlebot's seven distributed locations. Rendering adds latency that conflicts with high-volume crawling goals.
Crawl efficiency patterns
AI crawlers show less optimised behaviour than Googlebot. Vercel's research indicates ChatGPT and Claude spend over 34% of fetches hitting 404 pages, compared to Googlebot's 8%. This suggests crawl infrastructure is still maturing.
Solutions for AI visibility
The fix is straightforward in principle: ensure critical content exists in the initial HTML response. Implementation varies by technical architecture.
Server-Side Rendering (SSR)
SSR executes JavaScript on the server and delivers complete HTML to crawlers. The page is interactive after hydration, but the content exists before any client-side JavaScript runs.
For React applications, frameworks like Next.js provide SSR with minimal configuration changes:
```javascript
// Next.js App Router — server component fetched on every request
export default async function Page({ params }) {
  const data = await fetchContent(params.slug);
  return <Article data={data} />;
}
```
This ensures every page request receives fully rendered HTML, whether from a browser, Googlebot, or an AI crawler.
Static Site Generation (SSG)
For content that doesn't change frequently, pre-rendering at build time produces static HTML files. AI crawlers receive the same content as users, with no rendering required:
```javascript
// Next.js App Router — generated at build time (default behaviour)
export default async function Page() {
  const data = await fetchContent();
  return <Article data={data} />;
}
```
SSG is ideal for marketing pages, documentation, blog posts, and product catalogues with stable content.
Incremental Static Regeneration (ISR)
ISR combines static generation with on-demand updates. Pages are pre-rendered but regenerate when content changes, balancing freshness with AI accessibility:
```javascript
// Regenerate this page every hour
export const revalidate = 3600;

export default async function Page() {
  const data = await fetchContent();
  return <Article data={data} />;
}
```
Progressive enhancement for interactive elements
When JavaScript enhancement is necessary, structure content so the base experience works without it:
```html
<!-- Accordion that works for AI crawlers -->
<details>
  <summary>Product specifications</summary>
  <p>Full specifications content here, visible in raw HTML...</p>
</details>
```
The `<details>` element provides native expand/collapse behaviour without JavaScript. Content is always present in the HTML, just visually collapsed by default.
For more comprehensive guidance on rendering strategies and their trade-offs, see JavaScript SEO: Rendering Strategies for Search Visibility.
Prioritising what to fix
Not every page needs AI visibility. Focus remediation on content where AI citations drive value:
High priority:
- Product pages and documentation
- Key informational content that answers common queries
- Brand-defining pages (About, services, capabilities)
- Content targeting queries users ask AI systems
Lower priority:
- Interactive tools and calculators
- Authenticated dashboards
- Highly personalised experiences
- Content you've chosen to exclude from AI training
For content you want AI systems to access, ensure it's server-rendered. For content where AI access isn't valuable, client-side rendering may be acceptable, but understand the trade-off.
The broader visibility stack
JavaScript rendering is one layer of AI accessibility. Content that renders correctly still needs to be:
- Crawlable: Not blocked by robots.txt for relevant AI crawlers
- Accessible: Not behind authentication or paywalls
- Parsable: Structured clearly enough for content extraction
- Current: Fresh enough to be in the AI system's index
A page can pass all rendering tests and still fail to appear in AI responses due to crawler access restrictions or content structure issues. See AI Crawlers and Access Control for the full access-control stack.
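The crawlability layer, for example, is governed by robots.txt. A minimal sketch that explicitly allows two of the AI crawlers from the table above (adjust the user agents to match your own access policy):

```
# robots.txt — allow specific AI crawlers
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /
```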
FAQs
If I use Next.js, am I automatically fine?
Not necessarily. Next.js supports SSR, SSG, and ISR, but it also supports client-side rendering. The framework provides options; your implementation determines what crawlers see. Pages using `useEffect` to fetch data on the client will be empty to AI crawlers, even in a Next.js application.
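The anti-pattern looks like this (a sketch using the same assumed fetchContent and Article helpers as the earlier examples):

```javascript
"use client";

import { useEffect, useState } from "react";

// Anti-pattern sketch: content fetched in the browser after hydration.
export default function Page() {
  const [data, setData] = useState(null);

  useEffect(() => {
    fetchContent().then(setData); // runs only in the browser, never on the server
  }, []);

  if (!data) return null; // this empty state is all the server-delivered HTML contains

  return <Article data={data} />;
}
```

Moving the fetch into an async server component, as in the SSR example earlier, puts the content back into the initial HTML.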
Will AI crawlers eventually render JavaScript?
Possibly, but it's not inevitable. Rendering is expensive, and AI systems may find alternative approaches: partnerships for content access or different data collection methods. Don't wait for AI crawlers to evolve; make your content accessible now.
Does this affect AI Overviews in Google Search?
No. Google AI Overviews use Googlebot's rendering infrastructure, so JavaScript-rendered content is accessible. However, third-party AI systems (ChatGPT, Claude, Perplexity) cannot render JavaScript, so the same content may be visible in AI Overviews but invisible in those platforms.
Can I use dynamic rendering to serve different content to AI crawlers?
Technically possible, but problematic. Dynamic rendering (serving pre-rendered HTML to bots while serving JavaScript to users) adds complexity and potential cloaking concerns. Google considers it a workaround, not a best practice. Server-side rendering is a more sustainable solution.
How do I know which of my pages have this problem?
Audit your site by disabling JavaScript and documenting which pages lose critical content. For large sites, automated testing can compare server-rendered HTML against rendered DOM content across page templates. Prioritise templates with the highest traffic or strategic value.
Key takeaways
- AI crawlers don't render JavaScript: ChatGPT, Claude, Perplexity, and most AI systems see only raw HTML. This means sites need a distinct AI visibility audit alongside traditional SEO checks.
- Google visibility doesn't guarantee AI visibility: Googlebot renders JavaScript; AI crawlers don't. A site can rank in Google Search while being completely invisible to ChatGPT.
- Test by disabling JavaScript: What your page shows without JavaScript is what AI crawlers see. If critical content disappears, you have a rendering problem.
- Server-side rendering is the fix: SSR, SSG, or ISR ensures content exists in the initial HTML response, accessible to all crawlers regardless of rendering capability.
- Prioritise strategically: AI crawlers are unlikely to invest in rendering infrastructure soon. Sites that fix rendering gaps now gain a visibility advantage over competitors still waiting for crawlers to catch up.
Further reading
- The Rise of the AI Crawler (Vercel): primary research on AI crawler behaviour, including JavaScript rendering capabilities and crawl patterns
- Next.js Data Fetching documentation: implementation patterns for server-side rendering in React applications
- OpenAI crawler documentation: official documentation for GPTBot, ChatGPT-User, and OAI-SearchBot user agents