Modern JavaScript frameworks enable rich user experiences but can render content invisible to search engines. This article covers how search engines process JavaScript, the trade-offs between rendering strategies, and implementation approaches for React, Vue, and Next.js.
Why JavaScript SEO matters
Modern web development has shifted heavily toward JavaScript frameworks: React, Vue, Angular, Next.js, Nuxt. These tools enable rich, interactive user experiences but introduce complexity for search engine indexing.
The core challenge: search engines need to see your content to index it. When content is rendered client-side via JavaScript, there's no guarantee search engines will execute that JavaScript, wait for API responses, or see the final rendered state.
Google has invested heavily in JavaScript rendering capabilities, but this doesn't mean all JavaScript-rendered content is indexed reliably. Understanding rendering strategies and their SEO implications is essential for sites built on modern frameworks. This is a core area of technical SEO that requires close collaboration between SEO and engineering teams.
How search engines process JavaScript
Google's rendering pipeline
Google processes JavaScript pages through a two-phase system:
- Crawling and initial HTML parsing: Googlebot fetches the URL and parses the initial HTML response
- Rendering: The page enters a render queue where Google's Web Rendering Service (WRS) executes JavaScript
Critical point: There's a delay between crawling and rendering. Google's render queue processes pages based on available resources, which means JavaScript-dependent content may not be indexed for hours, days, or sometimes weeks after initial discovery.
HTTP status codes and rendering
HTTP status codes determine whether a page enters the rendering queue:
- All pages with a 200 HTTP status code are sent to the rendering queue, regardless of whether JavaScript is present on the page
- Non-200 status codes (e.g., 404, 500) may skip rendering entirely: Google may not execute JavaScript on error pages
- Pages are only excluded from rendering if a robots meta tag or X-Robots-Tag header tells Google not to index the page
This has practical implications: if your error pages rely on JavaScript to display content (such as helpful navigation or search suggestions), that content may never be rendered. Ensure error pages include meaningful content in the initial HTML response. JavaScript applications face additional challenges with error handling (see the section on soft 404s under URL structure and routing).
What Google's renderer can and cannot do
Capabilities:
- Execute modern JavaScript (ES6+)
- Process most popular frameworks (React, Vue, Angular)
- Handle common APIs (fetch, XMLHttpRequest)
- Execute JavaScript up to a timeout threshold
Limitations:
- No persistent state between page loads
- Limited interaction with user-triggered events (clicks, scrolls)
- Timeout constraints on JavaScript execution
- No access to localStorage/sessionStorage data from previous sessions
- Cannot handle infinite scroll without explicit links
Viewport expansion and layout quirks
Google's rendering behaviour includes a quirk that catches many developers off guard: viewport expansion.
- Initial render: Google renders the page using a fixed viewport (e.g., 1024×1024 for desktop or 412×732 for mobile)
- Viewport expansion: Google then expands the viewport height to match the full page length
- Re-render: This triggers recalculation of viewport-relative CSS units and may activate lazy-loaded elements
This expansion behaviour is unique to crawlers; real users never experience it. The consequence is that layout can shift dramatically between the initial render and the expanded state. Research by Merj demonstrates how this quirk creates a widening gap between what users see and what crawlers interpret.
The 100vh trap:
A common pitfall is using 100vh for hero sections or full-screen elements. When viewport expansion occurs:
- The crawler recalculates 100vh based on the expanded viewport (potentially thousands of pixels)
- A hero section intended to fill one screen becomes massively oversized
- Primary content gets pushed far down the page, potentially affecting how Google perceives content priority
- Lazy-loaded content beneath the hero may never trigger if the activation threshold isn't reached
/* Problematic: hero grows with viewport expansion */
.hero {
height: 100vh;
}
/* Better: cap maximum height to prevent crawler distortion */
.hero {
height: 100vh;
max-height: 800px;
}
Lazy loading that works for crawlers:
Most crawlers don't scroll or trigger user events. This means:
- Scroll-based lazy loading fails silently
- touchstart or wheel-based loading never activates
- Content intended for indexing may never appear in the rendered DOM
Safer patterns include:
- Using the Intersection Observer API (Google's viewport expansion triggers intersections)
- Providing static HTML fallbacks for critical content
- Avoiding scroll-triggered content loading for SEO-critical elements
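As a rough sketch of the first pattern, an Intersection Observer can swap in deferred images without relying on scroll events (the data-src attribute and 200px margin here are assumptions, not a required convention):
// Minimal sketch: lazy-load images with Intersection Observer
// (assumes deferred images carry their real source in a data-src attribute)
const lazyImages = document.querySelectorAll('img[data-src]');
const observer = new IntersectionObserver((entries, obs) => {
  entries.forEach(entry => {
    if (!entry.isIntersecting) return;
    const img = entry.target;
    img.src = img.dataset.src;      // swap in the real source
    img.removeAttribute('data-src');
    obs.unobserve(img);             // stop watching once loaded
  });
}, { rootMargin: '200px' });        // begin loading slightly before the element is visible
lazyImages.forEach(img => observer.observe(img));
Because Google's viewport expansion causes far-down elements to "intersect", this pattern degrades far more gracefully for crawlers than scroll or wheel listeners.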
Other search engines
While Google has sophisticated rendering capabilities, other search engines vary significantly:
| Search engine | JavaScript rendering |
|---|---|
| Google | Full rendering via WRS |
| Bing | Limited rendering, prefers server-rendered HTML |
| Yandex | Basic rendering support |
| Baidu | Minimal JavaScript support |
| DuckDuckGo | Uses Bing's index (limited rendering) |
If international SEO or non-Google traffic matters to your site, relying solely on client-side rendering is risky.
Systems that don't render JavaScript
Google's rendering capabilities are the exception, not the norm. Most systems that fetch web content only see the initial HTML response. Anything injected via JavaScript is invisible to them.
Social media platforms don't execute JavaScript when generating link previews. Open Graph tags, Twitter Card markup, and preview images must be present in the initial HTML. If these meta tags are injected client-side, shared links display with missing or broken previews.
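A minimal sketch of preview markup placed in the server-rendered head (titles, URLs, and image paths are placeholders):
<!-- Sketch: social preview tags in the initial HTML (placeholder values) -->
<head>
  <meta property="og:title" content="Widget Pro | My Store">
  <meta property="og:description" content="Short product summary for link previews">
  <meta property="og:image" content="https://example.com/images/widget-pro.jpg">
  <meta name="twitter:card" content="summary_large_image">
</head>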
AI training crawlers typically fetch static HTML without rendering. GPTBot, ClaudeBot, CCBot, and similar crawlers collect content for model training but generally don't execute JavaScript. Content rendered client-side won't be included in training datasets. For detailed coverage of AI crawler behaviour and access control, see AI Crawlers and Access Control.
Feed readers and aggregators parse RSS/Atom feeds and often fetch linked pages without JavaScript execution. If your feed links to JavaScript-rendered content, aggregators may display incomplete previews.
Accessibility tools and screen readers work with the DOM, but users on slow connections or older devices may experience delays or failures in JavaScript execution, affecting what content they can access.
Rendering strategies
Client-Side Rendering (CSR)
With CSR, the server sends a minimal HTML shell, and JavaScript constructs the page content in the browser.
<!-- Initial HTML response -->
<!DOCTYPE html>
<html>
<head>
<title>My App</title>
</head>
<body>
<div id="root"></div>
<script src="/bundle.js"></script>
</body>
</html>
SEO implications:
- Search engines see only the HTML shell initially
- Content depends entirely on render queue processing
- Delays between crawling and indexing
- Risk of incomplete rendering due to timeouts or errors
When CSR is acceptable:
- Authenticated/personalised dashboards (not meant for indexing)
- Internal tools
- Applications where SEO is not a priority
When CSR is problematic:
- Content-driven sites (blogs, news, documentation)
- E-commerce product pages
- Any page targeting organic search traffic
Server-Side Rendering (SSR)
With SSR, the server executes JavaScript and sends fully-rendered HTML to the client. The page is then "hydrated" with JavaScript for interactivity.
<!-- Server-rendered HTML response -->
<!DOCTYPE html>
<html>
<head>
<title>Product Name | My Store</title>
<meta name="description" content="Full product description...">
</head>
<body>
<div id="root">
<h1>Product Name</h1>
<p>Complete product content visible immediately...</p>
<!-- Full content present in HTML -->
</div>
<script src="/bundle.js"></script>
</body>
</html>
SEO implications:
- Content visible in initial HTML response
- No dependency on client-side rendering for indexing
- Faster time-to-index
- Reliable across all search engines
Trade-offs:
- Higher server load (rendering on each request)
- Time to First Byte (TTFB) may increase
- More complex infrastructure
Static Site Generation (SSG)
SSG pre-renders pages at build time, generating static HTML files that are served directly.
SEO implications:
- Optimal for search engines: pure HTML, no rendering required
- Fastest possible response times
- CDN-friendly for global distribution
- Content always available, regardless of JavaScript execution
Best suited for:
- Marketing pages
- Documentation sites
- Blogs and content sites
- Product catalogues with stable content
Limitations:
- Content is fixed at build time
- Large sites may have long build times
- Dynamic content requires rebuilds or hybrid approaches
Incremental Static Regeneration (ISR)
ISR (popularised by Next.js) combines static generation with on-demand regeneration. Pages are pre-built but can be regenerated after deployment when content changes.
// Next.js example
export async function getStaticProps() {
const data = await fetchProductData();
return {
props: { data },
revalidate: 3600, // Regenerate every hour
};
}
SEO implications:
- Static HTML benefits without full rebuilds
- Balances freshness with performance
- Ideal for large catalogues with periodic updates
Hydration explained
Hydration is the process of attaching JavaScript event handlers and state to server-rendered HTML. The HTML is already present; hydration makes it interactive.
1. Server renders HTML ──▶ Browser receives complete HTML
2. Browser displays HTML ──▶ User sees content immediately
3. JavaScript loads ──▶ Framework "hydrates" the HTML
4. Page becomes interactive ──▶ Click handlers, state management active
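In React 18, for instance, the client entry point hydrates the server-rendered markup with hydrateRoot (a minimal sketch; App and the #root container are assumed to match what the server rendered):
// Client entry point: attach interactivity to HTML the server already sent
import { hydrateRoot } from 'react-dom/client';
import App from './App'; // assumed root component, identical to the server render
// Reuses the existing DOM instead of rebuilding it, then wires up event handlers and state
hydrateRoot(document.getElementById('root'), <App />);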
Why hydration matters for SEO:
- Content is visible before JavaScript executes
- Search engines see full content in initial HTML
- User experience is improved (faster perceived load)
- Core Web Vitals benefit from faster Largest Contentful Paint (LCP)
Hydration pitfalls:
- Hydration mismatch: server HTML differs from client render (causes re-render)
- Large JavaScript bundles delay interactivity (poor Time to Interactive)
- "Uncanny valley": page looks ready but isn't interactive yet
Dynamic rendering (use with caution)
Dynamic rendering serves different content to search engines versus users: pre-rendered HTML to bots, client-rendered JavaScript to browsers.
Risks of dynamic rendering:
- Cloaking concerns if content differs significantly
- Maintenance overhead (two rendering paths)
- User-agent detection can fail or be circumvented
- Doesn't solve the underlying architectural problem
When it might be justified:
- Legacy applications where SSR migration is impractical
- Temporary solution while implementing proper SSR
- Very large sites with prohibitive SSR infrastructure costs
Framework-specific guidance
React
Default behaviour: Client-side rendering
SEO-friendly options:
- Next.js: Framework with built-in SSR, SSG, and ISR
- Gatsby: Static site generator for React
- React Server Components: Newer approach for server rendering
// Next.js static generation
export async function getStaticProps() {
const posts = await getBlogPosts();
return { props: { posts } };
}
// Next.js server-side rendering
export async function getServerSideProps(context) {
const product = await getProduct(context.params.id);
return { props: { product } };
}
Vue
Default behaviour: Client-side rendering
SEO-friendly options:
- Nuxt.js: Framework with SSR and SSG support
- VuePress/VitePress: Static site generators for documentation
// Nuxt static generation
export default {
target: 'static',
generate: {
routes: ['/page1', '/page2', '/page3']
}
}
// Nuxt server-side rendering
export default {
ssr: true,
target: 'server'
}
Next.js routing and static paths
Next.js requires explicit definition of static paths for dynamic routes:
// pages/products/[slug].js
// Define which paths to pre-render
export async function getStaticPaths() {
const products = await getAllProducts();
return {
paths: products.map(product => ({
params: { slug: product.slug }
})),
fallback: 'blocking' // or false, or true
};
}
export async function getStaticProps({ params }) {
const product = await getProduct(params.slug);
return { props: { product } };
}
Fallback options:
- false: Only pre-defined paths work; others return 404
- true: Unknown paths render on first request and show a loading state
- 'blocking': Unknown paths render on first request with no loading state (SSR-like)
For SEO, 'blocking' or false are preferred because they ensure search engines receive complete HTML without client-side loading states.
JavaScript and Core Web Vitals
JavaScript directly affects all three Core Web Vitals metrics. Understanding these connections helps diagnose performance issues on JavaScript-heavy sites.
Largest Contentful Paint (LCP)
LCP measures when the largest visible element finishes rendering. JavaScript can delay LCP in several ways:
- Render-blocking scripts: JavaScript files in the <head> without defer or async attributes block HTML parsing until they download and execute
- Client-side content: If the largest element (hero image, main heading) is injected via JavaScript, LCP waits for that script to run
- Chained requests: Content that depends on API calls adds network latency before the element can render
Mitigations:
- Server-render above-the-fold content so the LCP element is in the initial HTML
- Use defer for non-critical scripts; use async for scripts that don't depend on DOM order
- Preload critical resources with <link rel="preload">
Cumulative Layout Shift (CLS)
CLS measures visual stability: how much elements move after initial render. JavaScript commonly causes layout shifts when:
- Late-loading content: Elements injected after initial paint push other content down
- Font swapping: Web fonts loaded via JavaScript may trigger text reflow
- Dynamic embeds: Third-party widgets, ads, or iframes inserted after page load
- Images without dimensions: When JavaScript-loaded images lack width and height attributes
Mitigations:
- Reserve space for dynamic content with CSS min-height or aspect-ratio containers
- Include width and height on all images, even lazy-loaded ones
- Load third-party scripts after critical content renders
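For example, reserving space ahead of time might look like this (selectors and dimensions are assumptions):
/* Sketch: hold space for a late-loading embed so surrounding content doesn't shift */
.ad-slot {
  min-height: 250px; /* expected embed height */
}
/* Sketch: preserve the image's box before the file arrives */
.product-image {
  width: 100%;
  aspect-ratio: 4 / 3;
}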
Interaction to Next Paint (INP)
INP measures responsiveness: the delay between user interaction and visual feedback. Heavy JavaScript directly impacts INP:
- Long tasks: JavaScript execution blocking the main thread for 50ms+ delays response to clicks and key presses
- Hydration delays: During hydration, the page looks interactive but event handlers aren't yet attached
- Third-party scripts: Analytics, chat widgets, and other scripts competing for main thread time
Mitigations:
- Break large JavaScript bundles into smaller chunks loaded on demand
- Defer non-essential third-party scripts until after user interaction
- Use web workers for computationally expensive operations
- Consider partial hydration or islands architecture for better interactivity
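A rough sketch of the second mitigation, deferring a non-essential widget until the first interaction (the module path and init export are hypothetical):
// Sketch: load a chat widget only after the user first interacts with the page
let widgetLoaded = false;
const loadChatWidget = () => {
  if (widgetLoaded) return;          // guard against multiple trigger events
  widgetLoaded = true;
  import('./chat-widget.js').then(({ init }) => init());
};
['click', 'keydown', 'touchstart'].forEach(eventName =>
  window.addEventListener(eventName, loadChatWidget, { once: true, passive: true })
);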
Progressive enhancement and graceful degradation
These are complementary philosophies for building robust, accessible websites that work across varying browser capabilities.
Progressive enhancement
Start with a baseline functional experience using semantic HTML, then layer on CSS styling and JavaScript interactivity for capable browsers.
Layer 3: JavaScript ──▶ Rich interactions, dynamic updates
↑
Layer 2: CSS ──▶ Visual styling, layout, animations
↑
Layer 1: HTML ──▶ Content, structure, basic functionality
For SEO, this means:
- Core content is in HTML, visible without JavaScript
- Links are real <a href> elements, not JavaScript click handlers
- Forms work with standard submission, not just JavaScript
- Navigation is functional without client-side routing
Example of progressive enhancement:
<!-- Base: functional link -->
<a href="/products/widget" class="product-link">View Widget</a>
<script>
// Enhancement: smooth client-side navigation for capable browsers
document.querySelectorAll('.product-link').forEach(link => {
link.addEventListener('click', (e) => {
if (supportsHistoryAPI()) {
e.preventDefault();
loadPageWithAnimation(link.href);
}
// Otherwise, default link behaviour works fine
});
});
</script>
Graceful degradation
Build for modern browsers first, then ensure the experience degrades acceptably for less capable browsers or when JavaScript fails.
For SEO, this means:
- If JavaScript fails to load, content is still visible
- If rendering times out, critical information remains accessible
- Error states don't result in blank pages
Applying these principles to JavaScript frameworks
Even with JavaScript frameworks, you can maintain progressive enhancement principles:
Navigation:
// Bad: JavaScript-only navigation
<div onClick={() => navigate('/products')}>Products</div>
// Good: Real link with client-side enhancement
<Link href="/products">Products</Link>
// Renders as <a href="/products"> with client-side navigation enhancement
Forms:
// Bad: JavaScript-only form
<form onSubmit={(e) => { e.preventDefault(); submitViaAPI(); }}>
// Good: Works without JavaScript, enhanced with it
<form action="/api/submit" method="POST" onSubmit={handleEnhancedSubmit}>
The same principle applies to content: server-render the base content (as covered in the rendering strategies section), then enhance with JavaScript for dynamic updates or interactivity.
Structured data and metadata
Structured data (JSON-LD, microdata) enables rich results in search, including star ratings, FAQs, product information, and other enhanced listings. On JavaScript-rendered sites, structured data faces the same visibility challenges as other content.
Server-render structured data
Google can process JSON-LD injected via JavaScript, but server-rendering remains the safer approach:
- Structured data in the initial HTML is parsed during the crawl phase, not dependent on rendering
- Rendering delays, timeouts, or errors won't affect structured data visibility
- Validation tools show exactly what Google receives without rendering ambiguity
<!-- Server-rendered: visible immediately -->
<script type="application/ld+json">
{
"@context": "https://schema.org",
"@type": "Product",
"name": "Widget Pro",
"aggregateRating": {
"@type": "AggregateRating",
"ratingValue": "4.5",
"reviewCount": "127"
}
}
</script>
Metadata handling across rendering
Canonical tags, meta robots directives, and other metadata can behave unexpectedly when JavaScript modifies them:
- Multiple conflicting signals: If the initial HTML contains one canonical and JavaScript injects another, Google receives conflicting signals and may ignore both
- Most restrictive directive wins: A noindex in either the raw HTML or rendered DOM results in the page not being indexed, regardless of what the other state shows
- Title and description overwrites: JavaScript can update <title> and meta descriptions, but if rendering fails, Google uses the initial HTML values
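One way to keep these signals consistent is to render them once on the server. A minimal sketch using the Next.js pages router (the product fields and domain are assumptions):
// pages/products/[slug].js (sketch): metadata rendered into the initial HTML
import Head from 'next/head';
export default function ProductPage({ product }) {
  return (
    <>
      <Head>
        <title>{`${product.name} | My Store`}</title>
        <meta name="description" content={product.summary} />
        <link rel="canonical" href={`https://example.com/products/${product.slug}`} />
      </Head>
      <h1>{product.name}</h1>
    </>
  );
}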
URL structure and routing
JavaScript frameworks handle routing differently from traditional server-rendered sites. Some routing patterns create significant SEO problems.
Hash-based routing
Hash fragments (#) have a defined purpose in URLs: they link to a specific location within a page. Browsers never send the fragment to the server, so everything after the # character is invisible server-side.
Some JavaScript frameworks, particularly older versions of Angular and Vue in default configuration, use hash-based routing:
example.com/#/products
example.com/#/products/widget
example.com/#/about
To the server (and to search engines during the initial crawl), these are all the same URL: example.com/. The content after # is invisible until JavaScript executes. This means:
- Search engines may not recognise these as separate, indexable pages
- Links to hash-based URLs don't pass signals as effectively
- Sharing these URLs on platforms that don't execute JavaScript fails
Solution: Configure your router to use the History API (also called "HTML5 mode" or "history mode"), which produces clean URLs without fragments:
example.com/products
example.com/products/widget
example.com/about
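In Vue Router 4, for example, history mode is a one-line change (routes and component paths are placeholders):
// Sketch: Vue Router configured with the History API instead of hash URLs
import { createRouter, createWebHistory } from 'vue-router';
const router = createRouter({
  history: createWebHistory(), // produces /products rather than /#/products
  routes: [
    { path: '/products', component: () => import('./pages/ProductList.vue') },
    { path: '/about', component: () => import('./pages/About.vue') },
  ],
});
export default router;
Note that history mode requires the server to return the application shell (or a server-rendered page) for each of these paths; otherwise deep links return 404s.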
Soft 404s and error handling
Client-rendered JavaScript applications can't set HTTP status codes: the server returns the HTML shell with a 200, and JavaScript handles everything from there. This creates problems for error states:
- A "Page not found" message rendered by JavaScript still returns HTTP 200
- Google may index these as legitimate pages or treat them as soft 404s
- Crawl budget is wasted on error pages that should return 404
Solutions:
- Redirect to a server-rendered 404 page that returns the correct status code
- Use noindex meta tags on JavaScript-rendered error pages (Google treats these as soft 404s)
- For SSR frameworks, return proper 404 status codes from the server
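In Next.js, for example, a server-rendered page can return a genuine 404 (a sketch reusing the getProduct helper from the earlier examples):
// Sketch: return a real 404 status for missing products
export async function getServerSideProps({ params }) {
  const product = await getProduct(params.slug);
  if (!product) {
    // Next.js renders the 404 page and sends an HTTP 404 status code
    return { notFound: true };
  }
  return { props: { product } };
}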
Technical checklist for JavaScript SEO
Pre-launch verification
- [ ] Core content visible in initial HTML response (View Source, not Inspect)
- [ ] All important links are crawlable <a href> elements
- [ ] Meta tags (title, description, canonicals) present in server-rendered HTML
- [ ] Structured data present in initial HTML, not injected by JavaScript
- [ ] No critical content behind user interactions (clicks, scrolls, tabs)
Testing tools
- Google Search Console URL Inspection: See how Google renders your page
- Rich Results Test: Includes rendered HTML view
- Mobile-Friendly Test: Shows rendered page screenshot
- Chrome DevTools "Disable JavaScript": See what content exists without JS
Monitoring
- [ ] Check Search Console for indexing issues
- [ ] Monitor "Discovered - currently not indexed" for JavaScript-heavy pages
- [ ] Compare View Source vs rendered DOM for critical pages
- [ ] Track Core Web Vitals, especially LCP and INP
Common JavaScript SEO mistakes
| Mistake | Problem | Solution |
|---|---|---|
| Links as click handlers | Not crawlable | Use <a href> elements |
| Content loaded on scroll | May not be rendered | Include in initial HTML or use pagination links |
| Client-side redirects | May not be followed | Use server-side (301/302) redirects |
| Meta tags via JavaScript | May be missed or conflict with HTML values | Server-render all critical metadata |
| Hash-based routing (#/page) | URLs invisible to servers and crawlers | Use History API (see URL structure section) |
| Blocking robots on JS files | Prevents rendering | Allow Googlebot to access JS/CSS |
| Infinite scroll only | Content beyond initial load invisible | Add paginated alternatives |
| 100vh hero sections | Expand massively during crawler viewport expansion | Use max-height cap alongside 100vh |
| Scroll-triggered lazy loading | Crawlers don't scroll; content never loads | Use Intersection Observer or static fallbacks |
FAQs
What causes Google's renderer to fail?
Even when pages enter Google's render queue, rendering can fail silently. Common causes include JavaScript execution timeouts, blocked resources (if robots.txt prevents access to critical JS files), uncaught exceptions that halt script execution, and dependencies on APIs that require authentication or return errors. When rendering fails, Google falls back to the initial HTML, potentially indexing incomplete content without any warning in Search Console.
How long does Google take to render JavaScript content?
Google's internal data suggests a median render queue time of around five seconds, with most pages rendered within minutes. However, rendering delays can extend to hours or days during high-load periods or for lower-priority pages. Server-rendered content is indexed immediately without this queue dependency.
Will client-side rendering hurt my rankings?
Not directly. If Google successfully renders your content, it's treated equivalently to server-rendered content. The risks are indirect: rendering delays mean slower indexing, rendering failures mean missing content, and Core Web Vitals penalties affect user experience metrics. SSR and SSG eliminate these risks.
Should I block JavaScript files in robots.txt?
No. Blocking JavaScript files prevents Google from rendering your pages correctly. If Googlebot can't access the scripts that build your content, it sees only the empty HTML shell. Allow access to all JavaScript and CSS files required for page rendering.
Key takeaways
- Server-render critical content: Don't rely on client-side rendering for content you want indexed. This includes text, links, structured data, and metadata. If disabling JavaScript makes content disappear, many systems will never see it
- Choose your rendering strategy by content type: SSG for stable content (best performance and indexability), SSR when content changes frequently or is personalised per request
- Assume most systems don't render JavaScript: Only Google and modern browsers reliably execute JavaScript. AI training crawlers, social platforms, and other search engines typically see only the initial HTML
- JavaScript affects Core Web Vitals directly: Render-blocking scripts delay LCP, late-loading elements cause CLS, and heavy JavaScript harms INP. Optimise for performance alongside indexability
Further reading
- Google's JavaScript SEO documentation: Official guide to how Googlebot processes JavaScript content
- Rendering on the Web (Google Developers): Comprehensive comparison of SSR, SSG, CSR, and hybrid approaches
- Rendering, Style, and Layout: When Things Go Wrong (Merj): Giacomo Zecchini's research on viewport expansion, the 100vh trap, and how crawler rendering differs from user rendering
- Next.js Documentation on Data Fetching: Implementation patterns for getStaticProps, getServerSideProps, and ISR
- Vue SSR Guide: Server-side rendering setup for Vue and Nuxt applications