Indexing vs. Rendering: What Is the Difference?
Why Does This Matter?
Rendering is more important than you might think.
The reason it matters is simple: rendering provides the ground truth.
From the code alone, search engines can understand what a page is about and roughly what's going on.
With rendering, they can understand the user experience and far more about which content should take priority.
With rendering, they can answer questions like:
- Is content hidden behind a click?
- Does an ad fill the page?
- Is content that appears towards the bottom of the code actually displayed towards the top, or in the navigation?
- Is a page slow to load?
All these questions, and many more, are answered during rendering.
These answers are important to properly understand a page and how it should be ranked.
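That source-order question is worth making concrete. Here is a toy sketch (the element names, pixel values, and thresholds are all invented for illustration, and this is nothing like Google's actual implementation) of how a renderer could flag content that sits low in the HTML but displays near the top of the page:

```python
# Hypothetical sketch: flag elements that appear late in the source HTML
# but render near the top of the page (e.g. repositioned by CSS).
# All names and numbers here are invented for illustration.

def source_vs_rendered_mismatches(elements, top_of_page_px=200):
    """elements: list of dicts with 'name', 'source_index', 'rendered_y'."""
    total = len(elements)
    mismatches = []
    for el in elements:
        low_in_source = el["source_index"] > total // 2    # bottom half of the code
        high_on_screen = el["rendered_y"] < top_of_page_px  # near the top visually
        if low_in_source and high_on_screen:
            mismatches.append(el["name"])
    return mismatches

elements = [
    {"name": "hero",    "source_index": 0, "rendered_y": 0},
    {"name": "article", "source_index": 1, "rendered_y": 400},
    {"name": "footer",  "source_index": 2, "rendered_y": 2000},
    {"name": "nav",     "source_index": 3, "rendered_y": 50},  # late in code, top of screen
]

print(source_vs_rendered_mismatches(elements))  # ['nav']
```

The point is that only the rendered coordinates reveal the mismatch; the raw source order alone would mislead.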
When Does Rendering Occur?
In 2018, rendering took weeks.
Not surprisingly, it takes far less time now.
Seconds, in fact.
How Long Until Google Renders A Page?
The median is 5 seconds, and within minutes, 90% of indexed pages will be through the rendering queue.
It needs to be noted that this is queueing, not necessarily rendering.
That is to say, if you are on the positive side of that median, your page will begin rendering within 5 seconds, though it may not complete in that time.
If rendering begins at 4 seconds but takes 30 seconds to complete, the page still counts on the positive side of the median.
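To make the arithmetic concrete, here is a toy illustration (the delays are invented, not Google's data) of how a median queue-entry time of 5 seconds can coexist with renders that take far longer to finish:

```python
import statistics

# Invented queue-entry delays (seconds) for ten hypothetical pages.
# The median measures when rendering *begins*, not when it completes.
queue_delays = [1, 2, 3, 4, 5, 5, 6, 10, 60, 300]
print(statistics.median(queue_delays))  # 5.0

# A page whose rendering begins at 4s but takes 30s to complete
# still counts on the positive side of the median.
begin, duration = 4, 30
on_positive_side = begin <= statistics.median(queue_delays)
print(on_positive_side)  # True
```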
We’ve come a long, long way in two years: from weeks to seconds.
Bing operates differently.
When I asked their Web Ranking & Quality Project Manager, Frédéric Dubut, he responded:
The “before” he is referencing was my Tweet from last September:
Presumably, they too have sped things up, though I don’t have a newer confirmation on time.
So, the short answer to when rendering takes place is: after indexing.
The timeline is variable but short.
In practice, it means search engines will understand the content and context of a page before gaining a full understanding of how it should be prioritized, but in most cases the lag is moot.
A big leap forward took place in May of 2019 when Googlebot’s Web Rendering Service (WRS) component was updated.
Until then, the Web Rendering Service was using Chrome version 41.
In May 2019, the Web Rendering Service was upgraded to evergreen, meaning that it uses the most current version of Chrome for rendering (within a couple of weeks at any rate).
Essentially, now when your page is rendered by Google, it’s rendered more-or-less how you would see it in your browser.
What Does a Web Rendering Service Do?
I wanted to quickly answer a question that I found myself not quite wrapping my brain around until I realized I was thinking about it entirely wrong.
You are welcome to laugh at me for the obviousness of the hiccup in my brain.
First, let’s consider where a Web Rendering Service gets its instructions, and how.
Here’s basically the life-cycle of rendering:
- A page is discovered via sitemap, crawler, etc.
- The page is added to the site’s crawl list, to be crawled when crawl budget is available.
- The page content is crawled and indexed.
- The page is added to the rendering queue.
- The page is rendered.
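The life-cycle above can be sketched as a pair of queues. This is my own simplification (the names and structure are invented, not Google's architecture), but it captures the key point that indexing happens before rendering:

```python
from collections import deque

# Simplified model of the crawl -> index -> render pipeline.
crawl_queue = deque()
render_queue = deque()
index = {}     # url -> raw HTML as crawled
rendered = {}  # url -> rendered snapshot

def discover(url):
    crawl_queue.append(url)  # found via sitemap, links, etc.

def crawl_next():
    url = crawl_queue.popleft()
    index[url] = f"<html>raw content of {url}</html>"  # crawl + index
    render_queue.append(url)  # queued for rendering later

def render_next():
    url = render_queue.popleft()
    rendered[url] = f"rendered view of {url}"

discover("https://example.com/")
crawl_next()
# At this point the page is indexed but not yet rendered.
assert "https://example.com/" in index and "https://example.com/" not in rendered
render_next()
print(rendered["https://example.com/"])
```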
So, a critical and unspoken element of the process is the rendering queue.
When a page hits the top of the queue for rendering, the engine will send what is referred to as a headless browser to it.
This is the step I had difficulty with.
A headless browser is a browser without a graphical user interface.
For some reason, I had a difficult time wrapping my brain around how that worked.
Like, how is Google to know what’s there if it’s not graphically displayed?
The obvious answer is:
“The bot doesn’t have eyes either so … um … yeah.”
Once over that mental hiccup, I came to terms with it as a “browser light” that renders the page so the search engine can understand what appears where, and how, on a page, even though it doesn’t have eyes to see it.
When all goes well, the rendered version appears the same to Googlebot as it does in a graphical browser.
If it doesn’t, it’s likely because the page relies on an unsupported feature, such as a user permission request, or because one of the scripts or other resources errored.
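If the "no eyes" idea still feels odd, a toy example may help. This is nothing like Google's actual WRS, which runs full Chrome, but it shows how a program can decide what is "visible" without ever painting a pixel, using only Python's standard-library HTML parser:

```python
from html.parser import HTMLParser

class VisibleTextExtractor(HTMLParser):
    """Toy 'browser light': collects text while skipping subtrees whose
    inline style hides them. A real renderer also runs CSS and JS."""
    def __init__(self):
        super().__init__()
        self.hidden_depth = 0  # >0 while inside a hidden subtree
        self.visible = []

    def handle_starttag(self, tag, attrs):
        style = dict(attrs).get("style", "")
        if self.hidden_depth or "display:none" in style.replace(" ", ""):
            self.hidden_depth += 1

    def handle_endtag(self, tag):
        if self.hidden_depth:
            self.hidden_depth -= 1

    def handle_data(self, data):
        if not self.hidden_depth and data.strip():
            self.visible.append(data.strip())

html = '<div>Shown</div><div style="display: none"><p>Hidden until click</p></div>'
parser = VisibleTextExtractor()
parser.feed(html)
print(parser.visible)  # ['Shown']
```

No graphics were involved, yet the program knows which text a visitor would actually see.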
What About Pre-Rendering?
Basically, it’s an ethical form of cloaking: you make a copy of the page as it would appear in the DOM and serve that to the search engines, ensuring they see the same content a user does when they stop by to index it.
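In its simplest form, that means switching the response based on who is asking. Here is a hedged sketch (the function names and snapshot strings are invented for illustration; the crawler tokens are real user-agent substrings) of the dynamic-rendering idea:

```python
# Hypothetical sketch of dynamic rendering: serve a pre-rendered
# snapshot to known crawlers and the normal JS app to everyone else.
# Function names and HTML strings are invented for illustration.

BOT_TOKENS = ("Googlebot", "bingbot")

def choose_response(user_agent, prerendered_html, client_side_html):
    if any(token in user_agent for token in BOT_TOKENS):
        return prerendered_html   # fully expanded DOM snapshot
    return client_side_html       # shell page + JavaScript

bot_ua = "Mozilla/5.0 (compatible; Googlebot/2.1)"
human_ua = "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"
snapshot = "<html>full content</html>"
shell = "<html><script src='app.js'></script></html>"

print(choose_response(bot_ua, snapshot, shell))    # <html>full content</html>
print(choose_response(human_ua, snapshot, shell))  # the JS shell
```

The "ethical" part is the constraint that the snapshot contains the same content the JavaScript would produce for a user.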
“Ask and ye shall sometimes receive” an answer from Google.
And this was one of those times.
The response was:
Which is great news for those running Puppeteer or another pre-rendering library.
I know that I’ve seen cases of the pre-rendering system crashing without an error notification, causing a bunch of headaches (read: pages dropping from the index).
If we don’t need to pre-render, we don’t need to worry about such things.
Of course, the operative word here was “generally.”
So if you’re thinking about turning off your pre-rendering system, I have to recommend stopping it on a handful of pages first, then waiting to see what happens when they get recached.
Does Google see the content as it’s rendered?
If so, you may be able to stop pre-rendering altogether.
Rendering gives the engines the ability to prioritize content based on how a human would likely interact with a page.
It lets the engine know how content is positioned in a browser and how visible different elements are, so when they’re trying to judge or prioritize content or weigh usability, they’re working with the same product a visitor is.