The latest website update set “noindex” on the production site, changed the robots.txt file or rewrote the canonical tags. Does that sound familiar? We’ve all been there.
But how do you spot that something is wrong? A drop in rankings? Organic traffic down? Lower revenue? If you notice any of these, it’s already too late: the business is losing money. How quickly you discover issues that affect search engines is critical.
It’s impossible to stay on top of every change and update by checking manually. Spending time on these tasks is neither desirable nor effective. Automate them.
Automating recurring manual tasks frees up your time for things that provide more value to the business.
There are two approaches to this – monitoring and testing.
- Monitoring is checking the production site regularly.
- Testing is checking code updates before deploying to production.
Fast discoveries and subsequent fix releases reduce the negative impact on organic traffic and revenue.
Monitoring is checking the production site regularly. For example, the monitoring solution of your choice checks the website twice a day, at 8 AM and 4 PM, and notifies the team via Slack or email if any of the checks fail.
The team can immediately evaluate the issue, prioritize the required work and deliver a fix in the next release.
Although I recommend collaborating with your web engineers, implementing a monitoring solution doesn’t require their involvement. You can use third-party tools (e.g. Little Warden) or write your own script and start monitoring your site today.
If, like me, you’re not confident in your coding skills, you can use this PHP script from Jaroslav Hlavinka, a resident SEO at the Czech search engine Seznam.
Note: The current version of the script has some unhandled edge cases and its error messages are in Czech. The author has been working hard on a new version that will be fully in English. It should be completed by October 31, 2019, so he highly recommends waiting a couple of weeks before downloading it.
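To show the idea behind such a script, here is a minimal monitoring sketch in Python. The check rules and expected values are hypothetical examples, not the author’s actual implementation; a real monitor would fetch live pages on a schedule and push any failures to Slack or email.

```python
# Minimal monitoring sketch: parse a page's HTML and flag common SEO
# regressions. The expected values are hypothetical -- adapt to your site.
from html.parser import HTMLParser


class SEOTagParser(HTMLParser):
    """Collects the meta robots and canonical values from a page."""

    def __init__(self):
        super().__init__()
        self.robots = None
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and (attrs.get("name") or "").lower() == "robots":
            self.robots = (attrs.get("content") or "").lower()
        elif tag == "link" and (attrs.get("rel") or "").lower() == "canonical":
            self.canonical = attrs.get("href")


def check_page(html, expected_canonical):
    """Return a list of human-readable issues found on the page."""
    parser = SEOTagParser()
    parser.feed(html)
    issues = []
    if parser.robots and "noindex" in parser.robots:
        issues.append("meta robots is set to noindex")
    if parser.canonical != expected_canonical:
        issues.append(f"canonical is {parser.canonical!r}, "
                      f"expected {expected_canonical!r}")
    return issues
```

A scheduler such as cron would run this against a handful of URLs per template twice a day and forward any non-empty issue list to the team.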
Monitoring all pages can be problematic, especially when working with large enterprise websites where the page count is in the millions. To simplify the task, I monitor several URLs from each page type/template.
Pros of monitoring:
- Implementation doesn’t require involving web engineers.
Cons of monitoring:
- Issues are reported only after the code is deployed to production.
Tips for monitoring
Here are some of the rules that I follow:
- Request URLs for each page type/template so you don’t miss page types and templates that are used less often.
- Set the user agent to Googlebot Smartphone to check the same version of the code that Google sees – think of dynamic rendering.
- Verify the values, not just whether an element is present, to confirm the site is generating the right values.
Verify SEO elements against predefined rules. Don’t check only whether an element is present in the code; check whether it contains the right value. Knowing that the meta robots tag is in the code is useless unless you know whether its value is set to “noindex” or “index.”
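The tips above can be sketched in Python as follows. The user-agent string is an assumption based on Google’s published format (Google periodically updates the Chrome version in it, so check their documentation for the current string), and the regex assumes `name` appears before `content` in the tag.

```python
# Fetch a page as Googlebot Smartphone and verify the meta robots VALUE,
# not just the tag's presence. The UA string is illustrative and changes
# over time -- treat it as an assumption.
import re
import urllib.request

GOOGLEBOT_SMARTPHONE = (
    "Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) "
    "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2272.96 Mobile "
    "Safari/537.36 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
)


def fetch_as_googlebot(url):
    """Request the URL with a Googlebot Smartphone user agent."""
    req = urllib.request.Request(
        url, headers={"User-Agent": GOOGLEBOT_SMARTPHONE}
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.read().decode("utf-8", errors="replace")


def meta_robots_value(html):
    """Extract the meta robots value; assumes name precedes content."""
    match = re.search(
        r'<meta[^>]+name=["\']robots["\'][^>]*content=["\']([^"\']*)',
        html, re.I,
    )
    return match.group(1).lower() if match else None
```

With this, a rule like “product pages must be ‘index,follow’” becomes a value comparison instead of a mere presence check.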
Testing is checking code updates before they are deployed to the production site. It needs to be baked into the development process, which may be challenging, especially if you’re an external agency.
Work with your web engineers to understand their processes. All teams behind large websites use test automation (software testing). The tools are already there; you just need to work with the engineers on extending their tests to cover the elements important for SEO.
Tests for SEO need to have the power to break the build and prevent the code from being merged to production before the issue is resolved.
Be careful, though! You don’t want to be overly restrictive. Differentiate between issues that break the build and warnings. Adding noindex to the homepage should break the build, but changing an H2 to an H3 should only throw a warning (if you even need to test this).
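One way to sketch that separation is a small severity map. The check names and rules here are hypothetical examples, not a standard:

```python
# Hypothetical severity rules: some failed checks break the build,
# others only produce a warning.
BREAKS_BUILD = {
    "noindex_on_homepage",
    "missing_canonical",
    "robots_txt_blocks_all",
}
WARNS_ONLY = {
    "h2_changed_to_h3",
    "title_slightly_too_long",
}


def evaluate_checks(failed_checks):
    """Split failed checks into build-breaking errors and warnings."""
    errors = sorted(c for c in failed_checks if c in BREAKS_BUILD)
    warnings = sorted(c for c in failed_checks if c in WARNS_ONLY)
    return errors, warnings


def build_passes(failed_checks):
    """The build fails only when at least one error-level check failed."""
    errors, _ = evaluate_checks(failed_checks)
    return not errors
```

The CI pipeline would call `build_passes()` after the SEO checks run and block the merge only on error-level failures, while warnings just get reported.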
Also, remember that not everything needs to be tested. Components behind an authentication wall don’t need to be tested for SEO reasons.
Types of test automation
There are three main types of tests:
- Unit tests take a small piece of the product and test that piece in isolation. They are fast, reliable and isolate failures.
- Integration tests take a small group of units, often two units, and test their behavior as a whole, verifying that they coherently work together.
- End-to-end tests simulate real scenarios and help easily determine how a failing test would affect users and search engines.
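To make the distinction concrete, here is what a unit test for SEO might look like, built around a hypothetical `build_meta_robots()` helper tested in isolation:

```python
# Unit-test sketch for a hypothetical helper that decides a page's
# meta robots value. An integration test would combine this with the
# template renderer; an end-to-end test would load the built page in
# a headless browser and read the tag out of the live DOM.
def build_meta_robots(page):
    """Internal search results and staging pages must not be indexed."""
    if page.get("is_search_results") or page.get("env") == "staging":
        return "noindex,follow"
    return "index,follow"


def test_production_page_is_indexable():
    assert build_meta_robots({"env": "production"}) == "index,follow"


def test_internal_search_is_not_indexable():
    page = {"env": "production", "is_search_results": True}
    assert build_meta_robots(page) == "noindex,follow"
```

A test runner such as pytest would discover and run the `test_` functions automatically on every build.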
To find the right balance between the three test types, the best visual aid is the testing pyramid.
End-to-end tests are the closest to what Googlebot may see and are great for testing SPAs (Single Page Applications), but try to break your testing needs into smaller unit tests. They are faster and help discover potential issues early in the process, not on the last day of the sprint.
We know headless browsers mainly for their use in dynamic rendering, but they are first and foremost automation tools.
New Relic, Jenkins, and Selenium are examples of other tools your engineers may use.
Pros of testing:
- Flags issues before any damage occurs, lowering the chance of deploying code that would negatively affect visibility in search results.
Cons of testing:
- Requires close collaboration with the web engineers, which lengthens the implementation.
Who’s doing what?
Building automated tests is a collaborative effort, but it helps to determine who owns which part of the process.
The SEO:
- Defines what should be tested.
- Defines which pages should be tested.
- Defines which failures should break the build.
The web engineers:
- Decide what test types are appropriate.
- Write the testing code.
- Update the new code if one of the SEO tests fails.
It’s easier to start with monitoring, but it should not be an either/or choice. The end goal is having automated tests AND monitoring. Combining both minimizes the risk of not noticing when something goes wrong.
How do you make sure nothing gets accidentally broken over time? We don’t talk about testing and monitoring as often as we should, so I’d love to understand your process or hear about the tools of your choice.