Recovering 1.8M Product Pages from JavaScript Rendering Failure
A framework migration left most of their product catalogue invisible to Google. The culprit wasn't obvious—and the fix wasn't what anyone expected.
Key Results
- Organic traffic up ~70% from the post-migration low
- 1.8M pages back in the index
- 6 weeks to full recovery
What happened
About three weeks after the client launched a redesigned product catalogue on a new JavaScript framework, their organic traffic had dropped by roughly 40%. The pattern was telling once I looked closer: product pages—the ones that actually drive revenue—were losing visibility fast, while category pages seemed fine.
The internal team had already gone through the usual suspects. Redirects looked correct. Canonicals pointed where they should. The pages rendered perfectly in a browser. But Search Console told a different story: indexed pages were declining week over week, and impressions for product queries had basically collapsed.
The business impact was real—somewhere in the range of £350-400K in lost organic revenue in the first month alone, with projections looking much worse if this continued.
What I found
Here's the frustrating part: you couldn't see this problem just by loading the pages in Chrome DevTools. It only showed up during Googlebot's rendering process.
The new React-based product pages were fetching all the important content—titles, descriptions, prices, availability—from API calls that finished after the initial page load. That's fine for users, but it turns out Googlebot has stricter timeout constraints than a regular browser.
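To make that failure mode concrete, here's a minimal sketch of the pattern (component, types, and the API endpoint are illustrative, not the client's code). Everything a crawler needs only enters the DOM once the fetch resolves, so a render snapshot taken before that point captures nothing but the placeholder.

```tsx
// Hypothetical sketch of the pattern that failed: all indexable content
// arrives via a client-side fetch after the initial HTML is served.
import { useEffect, useState } from "react";

type Product = {
  title: string;
  description: string;
  price: string;
  availability: string;
};

export function ProductPage({ sku }: { sku: string }) {
  const [product, setProduct] = useState<Product | null>(null);

  useEffect(() => {
    // The data only exists in the DOM once this resolves. If rendering is
    // snapshotted before then, crawlers see the skeleton below instead.
    fetch(`/api/products/${sku}`)
      .then((res) => res.json())
      .then(setProduct);
  }, [sku]);

  if (!product) {
    // This placeholder is effectively what Googlebot captured.
    return <div className="skeleton">Loading…</div>;
  }

  return (
    <main>
      <h1>{product.title}</h1>
      <p>{product.description}</p>
      <p>{product.price}</p>
      <p>{product.availability}</p>
    </main>
  );
}
```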
Using Google's URL Inspection tool in "live test" mode, I compared what Googlebot actually rendered against what the page should show. The difference was stark: Googlebot's version had placeholder elements everywhere the product data should have been.
The root cause? A third-party analytics script was blocking the main thread during a critical window. On a normal connection, the delay was maybe 2-3 seconds—barely noticeable to a user. But Googlebot's rendering infrastructure operates with tighter timeouts. That delay meant the product content simply never populated before Googlebot took its snapshot.
The constraints I had to work with
This wasn't going to be a straightforward fix, and honestly, the first approach I considered wouldn't have worked anyway:
- No SSR in the architecture: The whole thing was client-side rendered. Rebuilding for server-side rendering would've taken the engineering team 8-10 weeks—time they didn't have.
- Couldn't remove the blocking script: The analytics integration was contractually required for their marketing attribution. Taking it out wasn't an option.
- A major sale was coming: There was a deployment freeze in 4 weeks. Whatever fix we came up with had to be small enough to ship quickly and stable enough not to risk the sale.
The approach
Given those constraints, I recommended a hybrid solution that didn't require SSR or removing the problematic script. The engineering team implemented the following under my guidance:
1. Script deferral
The analytics integration was restructured so it wouldn't block anything critical. They moved the initialisation to fire after DOMContentLoaded, which meant product content could hydrate before any third-party JavaScript ran.
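A minimal sketch of that deferral pattern, assuming the vendor tag can be injected from a small loader (the script URL and function name are placeholders, not the actual vendor snippet):

```ts
// Sketch of deferring a third-party tag until after DOMContentLoaded so it
// never competes with hydration. URL and names are placeholders.
function loadAnalytics(): void {
  const script = document.createElement("script");
  script.src = "https://analytics.example.com/tag.js"; // placeholder URL
  script.async = true; // async injection keeps the tag off the critical path
  document.head.appendChild(script);
}

if (document.readyState === "loading") {
  // Defer until the initial DOM is parsed so product content can hydrate first.
  document.addEventListener("DOMContentLoaded", loadAnalytics, { once: true });
} else {
  // DOMContentLoaded has already fired (e.g. this loader executed late).
  loadAnalytics();
}
```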
2. Pre-rendering the essentials
They added a build-time step that baked the core product data—title, description, main image, price—directly into the HTML. This content was visible to crawlers immediately, while React still enhanced it with dynamic features once it loaded.
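As an illustration of the idea rather than the client's actual build pipeline, a step like the following could stamp the crawl-critical fields into a static shell per product (file paths, template markers, and the catalogue source are assumptions):

```ts
// Illustrative build-time pre-render: write an HTML shell per product that
// already contains the fields crawlers need; React hydrates on top later.
import { promises as fs } from "node:fs";

type Product = {
  sku: string;
  title: string;
  description: string;
  image: string;
  price: string;
};

async function prerenderCatalogue(products: Product[]): Promise<void> {
  const template = await fs.readFile("dist/product-shell.html", "utf8");
  await fs.mkdir("dist/products", { recursive: true });

  for (const p of products) {
    // A real build step would HTML-escape these values before injecting them.
    const html = template
      .replace("<!--PRODUCT_TITLE-->", `<h1>${p.title}</h1>`)
      .replace("<!--PRODUCT_DESCRIPTION-->", `<p>${p.description}</p>`)
      .replace("<!--PRODUCT_IMAGE-->", `<img src="${p.image}" alt="${p.title}">`)
      .replace("<!--PRODUCT_PRICE-->", `<span class="price">${p.price}</span>`);

    await fs.writeFile(`dist/products/${p.sku}.html`, html, "utf8");
  }
}
```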
3. Dynamic rendering for the edge cases
About 15% of pages had genuinely dynamic content (real-time stock levels, personalised pricing). For those, the team set up a lightweight dynamic rendering solution using Cloudflare Workers. It served pre-rendered content to bots while giving users the full SPA experience.
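A sketch of what that Worker-level split can look like (the bot list is abbreviated and the prerender origin is a placeholder, not the client's implementation):

```ts
// Cloudflare Worker sketch: crawlers get a pre-rendered snapshot, everyone
// else passes straight through to the SPA origin.
const BOT_PATTERN = /googlebot|bingbot|duckduckbot|baiduspider|yandex/i;

export default {
  async fetch(request: Request): Promise<Response> {
    const userAgent = request.headers.get("user-agent") ?? "";
    const url = new URL(request.url);

    if (BOT_PATTERN.test(userAgent)) {
      // Serve the fully rendered version from the prerender service.
      const prerenderUrl = `https://prerender.example.internal${url.pathname}`;
      return fetch(prerenderUrl, {
        headers: { "x-original-host": url.hostname },
      });
    }

    // Regular users get the normal client-side rendered experience.
    return fetch(request);
  },
};
```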
Tech stack: React, Cloudflare Workers, Cloud Crawler, Google Search Console URL Inspection API
The outcome
The staged rollout started in week 2, and everything was deployed by week 4:
- Indexing: Those 1.8M product pages came back into the index over about 6 weeks
- Traffic: Climbed roughly 70% from the post-migration low, ending up about 10-15% higher than pre-migration levels
- Revenue: The client estimated around £500K in recovered organic revenue that quarter
- Side benefit: Core Web Vitals improved too—LCP dropped from over 4 seconds to around 2 seconds, as a byproduct of the pre-rendering work
What I'd tell someone facing this
Test with Googlebot, not just your browser. The URL Inspection tool's "live test" feature shows you what Googlebot actually sees after rendering. This should be standard in any pre-launch QA for JavaScript-heavy sites. It catches things that look fine in DevTools but break for crawlers.
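If you want to do that at scale rather than one URL at a time, the URL Inspection API can spot-check index coverage for a sample of URLs. Here's a hedged sketch using the googleapis Node client (the siteUrl, sample URLs, and credential setup are assumptions, and note the live-test rendered HTML itself is only visible in the Search Console UI):

```ts
// Spot-check index coverage for sampled product URLs via the
// Search Console URL Inspection API.
import { google } from "googleapis";

async function checkCoverage(siteUrl: string, urls: string[]): Promise<void> {
  const auth = new google.auth.GoogleAuth({
    scopes: ["https://www.googleapis.com/auth/webmasters.readonly"],
  });
  const searchconsole = google.searchconsole({ version: "v1", auth });

  for (const inspectionUrl of urls) {
    const res = await searchconsole.urlInspection.index.inspect({
      requestBody: { inspectionUrl, siteUrl },
    });
    const status = res.data.inspectionResult?.indexStatusResult;
    // Log the verdict and coverage state so trends show up quickly.
    console.log(inspectionUrl, status?.verdict, status?.coverageState);
  }
}

// Example: sample a handful of product URLs rather than the full catalogue.
checkCoverage("https://www.example.com/", [
  "https://www.example.com/products/sku-123",
  "https://www.example.com/products/sku-456",
]).catch(console.error);
```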
Don't overlook third-party scripts. Most performance monitoring focuses on your own code. Third-party scripts can introduce blocking behaviour that only shows up under specific conditions—like Googlebot's rendering environment.
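One way to surface that kind of blocking in the field, offered as an illustrative sketch rather than part of the original fix, is a long-task observer that attributes main-thread blocks to their source:

```ts
// Flags main-thread blocks over 50 ms and logs which container or script
// they came from. Attribution is only as precise as the browser exposes.
const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // TaskAttributionTiming is not in every TS lib.dom version, hence the cast.
    const attribution = (entry as any).attribution?.[0];
    console.warn(
      `Long task: ${Math.round(entry.duration)} ms`,
      attribution?.containerSrc || attribution?.containerName || "unknown source"
    );
  }
});

observer.observe({ entryTypes: ["longtask"] });
```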
Hybrid approaches often work better than "pure" ones. The choice isn't always between full SSR and pure client-side rendering. Pre-rendering the critical stuff combined with selective dynamic rendering gave us the best balance of engineering effort, performance, and SEO outcomes.
"Honestly, we'd already assumed it was a penalty and were bracing for the worst. Having someone explain exactly why it was happening—and that we didn't need a full rebuild—made all the difference."
— Head of Product, Brazil E-commerce Platform
Struggling with JavaScript rendering issues?
Let's diagnose whether your content is actually reaching search engines.
Get in Touch