There are two heroes in this post.
You can use your Chrome browser to do quick testing because Google has upgraded its web rendering service (WRS) to Evergreen Googlebot, which is basically the latest version of Chromium (give or take a few weeks).
The other Google tools, such as the URL Inspection tool in Google Search Console and the Mobile-Friendly Test, still use the old web rendering service powered by Chrome 41 (as of June 6, 2019).
The upgrade significantly enhanced the capabilities of WRS because it brought:
- IntersectionObserver API (great for lazy-loading)
We’ve all seen this diagram John Mueller and Tom Greenaway showed us during Google I/O in 2018.
If your website changes often, waiting for the second wave of indexing may not be the best idea. Think of news websites or car dealerships. This delay can hurt websites that rely on fresh content being indexed immediately.
This looks awful, but it was actually not that bad.
All the content is there because it’s a server-side rendered site. We can’t see the content because the opacity of the entire body tag is set to 0.
Opacity is a CSS property that adds transparency to elements.
It’s not ideal, but it’s not mission critical. The website still ranks very well and drives tons of traffic from search engines.
A client-side rendered SPA (single-page application) always means fun.
As you can see, I was extremely creative when coming up with the names of those two websites.
The solution used for this website to serve static HTML is called dynamic rendering.
Dynamic rendering means switching between client-side rendered and prerendered content for specific user agents.
Prerender.io, Puppeteer, and Rendertron are three popular prerendering options. All of them use an instance of headless Chrome.
Ensuring that static HTML is served only to web crawlers is done via user agent detection.
A user agent is a string that identifies the client making the request for a page.
You want to detect all the search engines that may drive traffic to your website, such as Google, Bing, and DuckDuckGo. If you care about international markets, throw Yandex, Baidu, Seznam, and others into the mix.
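To make this concrete, here is a minimal Python sketch of user agent detection. The pattern list and the `is_crawler` helper are hypothetical, not taken from any particular prerendering product; extend the list for the crawlers that matter to your market.

```python
import re

# Hypothetical list of crawler user-agent substrings -- extend it with
# every search engine (and market) you care about.
CRAWLER_PATTERNS = re.compile(
    r"googlebot|bingbot|duckduckbot|yandex|baiduspider|seznambot",
    re.IGNORECASE,
)

def is_crawler(user_agent: str) -> bool:
    """Return True when the request should receive prerendered HTML."""
    return bool(CRAWLER_PATTERNS.search(user_agent or ""))
```

Your web server or middleware would call `is_crawler()` on each incoming request and route matches to the prerenderer while regular visitors get the client-side rendered app.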
You’re not done yet
The user agent detection is done. The prerendering solution is up and running. You’re all set to receive phenomenal visibility in search results. Aren’t you?
Dynamic rendering is tricky because it’s hard to notice rendering problems when the site is fully functional to users.
How do you notice when something is wrong with the rendering process? A decrease in organic traffic? A drop in rankings?
By then it’s already too late, and the business is losing money.
Before you label the implementation of dynamic rendering as finished, set up automated monitoring of the rendering process.
Testing vs. monitoring
- Testing is checking code updates before they are deployed to the production site. We could talk about unit tests, integration tests, and end-to-end tests for quite some time, but that’s not the goal of this post (if you want to know more, here is a great deck about Software Testing for SEO from Mike King). I’ll just mention that testing needs to be baked into your development process, which may be challenging, especially if you’re an external agency.
- Monitoring is checking the production site on a regular basis (e.g., daily). You can use third-party tools or write your own script and start monitoring your site today.
Automated monitoring tips
- Request URLs for each page type/template.
- Set the user agent to Googlebot Smartphone.
- Verify the values, not just whether an element is present.
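The second tip can be sketched in a few lines of standard-library Python. The `build_request` helper is hypothetical, and the `W.X.Y.Z` Chrome version placeholder in Googlebot’s user agent string varies since the evergreen upgrade, so check Google’s crawler documentation for the current form.

```python
import urllib.request

# Googlebot Smartphone user-agent string (the Chrome version, shown here
# as the documented W.X.Y.Z placeholder, changes with evergreen updates).
GOOGLEBOT_SMARTPHONE = (
    "Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) "
    "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/W.X.Y.Z Mobile "
    "Safari/537.36 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
)

def build_request(url: str) -> urllib.request.Request:
    """Build a request that identifies itself as Googlebot Smartphone."""
    return urllib.request.Request(
        url, headers={"User-Agent": GOOGLEBOT_SMARTPHONE}
    )
```

Fetching each page-type URL with such a request shows you exactly what your dynamic rendering setup serves to Googlebot, rather than what it serves to a regular browser.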
Verify SEO elements against predefined rules. Don’t just check whether an element is present in the code; check whether it contains the right value. It’s useless to know that the meta robots tag is in the code unless you know whether its value is “noindex” or “index.”
Rendering success rate
Monitoring of grey.com over time helped us discover problems with the rendering process we wouldn’t have discovered otherwise because the site worked perfectly fine for the users.
The rendering success rate shows the percentage of pages that were rendered correctly.
The higher the number, the happier I am.
Rendering success rate = number of successfully rendered pages / number of tested pages
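The formula is simple enough to fold straight into a monitoring script. A small Python helper, with guard against an empty test run (the absolute page counts in the test are made up for illustration; the post only reports percentages):

```python
def rendering_success_rate(rendered_ok: int, tested: int) -> float:
    """Percentage of monitored pages that returned correctly rendered HTML."""
    if tested == 0:
        raise ValueError("no pages were tested")
    return 100 * rendered_ok / tested
```

Logging this number after every monitoring run gives you the time series discussed below.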
Initially, the rendering success rate was only 62.7%. This means we returned static HTML to Googlebot for only about 6 pages out of 10.
The other 4 times out of 10, we returned this…
There was absolutely no content. It was just this empty layout, and that wasn’t good at all.
We ran some analyses, brainstormed with the developers, and merged log files with rendering data in Power BI.
We got it right after making the site load correctly with cookies and storage turned off. The rendering success rate jumped to 99.6%. It’s not perfect, but it’s a number I’m very happy with.
- Verify that content which is supposed to be visible is visible.
- Detect not only search engines but also social media crawlers.
- Set up daily monitoring. It’s very useful even for static websites.
- Watch the rendering success rate over time.