How client-side rendering became the default
The move to client-side rendering made sense when it happened. AngularJS and later React changed how people thought about building interfaces. The pitch was real: a better developer experience, clean separation between frontend and backend, cheap static hosting. You could deploy a bundle to a CDN, let the browser do the heavy lifting, and scale without managing servers.
The technical outcome was a generation of websites that, on first request, deliver almost nothing. A thin HTML shell, a JavaScript bundle, and a blank container waiting to be filled. The content assembles in the browser once the script runs.
For a human visitor with a modern browser, this is invisible. For a crawler that does not execute JavaScript, it is the whole story.
What AI crawlers actually see
Googlebot operates on a two-pass system. It fetches the raw HTML first, then returns later to render JavaScript. Slow, but eventually complete.
GPTBot, ClaudeBot, and PerplexityBot work differently. They fetch the page once, read what is in the HTML response, and stop. An analysis of 569 million GPTBot requests found zero evidence of JavaScript execution. The crawlers sometimes download JavaScript files, but they do not run them. [Source: AI Crawlers & JavaScript Rendering, SearchViu 2025]
For a client-side rendered site, that means these crawlers see nothing. An empty container and a script tag. No headings, no body copy, no structured content of any kind.
What the traffic data shows
New data from Ahrefs makes the stakes clearer. Total search traffic across 75,000 websites in their panel dropped 7.5% over the last eight months (494M to 457M visits). In the same window, traffic from AI chatbots grew 27%, from 2.9M to 3.7M. The direction of travel is not ambiguous.
But neither is the gap: organic search still dwarfs every other channel by volume. The "SEO is dead" narrative is wrong. What is happening is more specific and more actionable than that.
A gap no dashboard flags
What is striking about this gap is how invisible it is. A site can hold reasonable search rankings, because Google eventually renders the JavaScript, and simultaneously have zero presence in AI-generated answers. Search Console shows no problem. Nothing sends an alert.
You can verify the gap with citation-tracking tools like Promptwatch or Rankshift (among others), which show whether your content actually appears in AI-generated responses. Most teams I talk to have not checked.
The reason to check now rather than later is the compounding effect. AI systems tend to cite sources that have been cited before, so early visibility reinforces itself. A site that is invisible to AI crawlers today is not simply missing some traffic; it is also less likely to accumulate the citation history that would help it appear in future answers. Generative engine optimisation (GEO) is the name that has stuck for this problem, and unlike traditional SEO it still depends directly on what is in the initial HTML response. Server-side rendering (SSR) is therefore not optional for it.
The edge argument
The standard case against SSR was always about infrastructure overhead. More server load, more latency, harder to scale. It was a fair concern in 2016.
Edge computing has made it obsolete. Cloudflare Workers runs across more than 300 data centres globally, with cold starts under 5 milliseconds. SSR at the edge can deliver the initial HTML response faster than a client-side bundle can download and execute in the browser. The latency argument has reversed. I covered the edge performance case in more detail in the context of personalisation (Your personalisation is costing you sales), and the same infrastructure logic applies here.
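To make the shape of edge SSR concrete, here is a minimal sketch of a fetch handler in the Cloudflare Workers module style. The product name and data are placeholders of my own; a real worker would pull them from an origin API or edge storage. The point is that the very first response already contains headings and body copy a crawler can read.

```typescript
interface Product {
  name: string;
  price: string;
}

// Render a complete HTML document on the server, so the initial
// response contains real content rather than an empty container.
function renderPage(product: Product): string {
  return `<!doctype html>
<html>
  <head><title>${product.name}</title></head>
  <body>
    <h1>${product.name}</h1>
    <p>Price: ${product.price}</p>
  </body>
</html>`;
}

const handler = {
  async fetch(_request: Request): Promise<Response> {
    // Placeholder data; in practice this would come from an API call.
    const product: Product = { name: "Example Widget", price: "£49" };
    return new Response(renderPage(product), {
      headers: { "content-type": "text/html; charset=utf-8" },
    });
  },
};

export default handler;
```

A crawler fetching this page once, with no JavaScript execution, still gets the heading and the price. That is the whole difference.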
The cost model has also shifted. Edge compute is pay-per-request, with no idle servers and no capacity planning. The infrastructure objection that made SSR a hard sell for years no longer holds the same weight.
The practical check
Want to check what your website yields without JavaScript?
Open it with JavaScript disabled, or look at the raw HTML in DevTools before scripts execute. If the content is there (headings, body copy, structured information), you are in a workable position. If you see an empty container and a bundle reference, that is the gap.
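The same check can be scripted. This sketch strips scripts, styles, and tags from an initial HTML response and measures how much readable text is left; the regexes and the 50-character threshold are illustrative choices of mine, not a robust HTML parser.

```typescript
// A crawler that does not execute JavaScript only ever sees this text.
function visibleText(html: string): string {
  return html
    .replace(/<script[\s\S]*?<\/script>/gi, "") // drop inline scripts
    .replace(/<style[\s\S]*?<\/style>/gi, "")   // drop inline styles
    .replace(/<[^>]+>/g, " ")                   // drop remaining tags
    .replace(/\s+/g, " ")
    .trim();
}

function looksClientRendered(html: string): boolean {
  return visibleText(html).length < 50; // illustrative threshold
}

// Typical client-side shell: empty container plus a bundle reference.
const csrShell = `<!doctype html><html><body>
  <div id="root"></div>
  <script src="/static/js/main.abc123.js"></script>
</body></html>`;

// Server-rendered page: the content is already in the response.
const ssrPage = `<!doctype html><html><body>
  <h1>Pricing</h1>
  <p>Our plans start at £19 per month and include onboarding support,
  a dedicated account manager, and priority email support.</p>
</body></html>`;

console.log(looksClientRendered(csrShell)); // true: the crawler sees nothing
console.log(looksClientRendered(ssrPage));  // false: headings and copy survive
```

In practice you would feed it the raw response for your own pages, for example `await (await fetch(url)).text()`, which is effectively what a single-fetch crawler reads.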

Modern frameworks handle this differently. Remix, Next.js, and Astro (e.g. two-point-o.com) default to server-side or static rendering. A standard Create React App setup is pure client-side. Nuxt and SvelteKit support SSR but need deliberate configuration. The question is not which framework your team chose; it is which mode it is running in.
If you want to go further than rendering and think about how AI agents interact with your site structurally, that is a separate but related question, one I looked at in Your website was built for humans. AI agents have different needs.
Want to talk about how to get this right? Let's talk.
Contact us