Google is introducing 4 changes to the Search Console coverage report based on user feedback.

Coverage report

Why is Google Search Console making these adjustments?

Over the past two years, Google has been collecting feedback from the webmaster community in order to provide better information. There was quite a bit of uncertainty about certain reports, and Google itself has the following to say about this:

“Based on the feedback we got from the community, today we are rolling out significant improvements to this report so you’re better informed on issues that might prevent Google from crawling and indexing your pages. The change is focused on providing a more accurate state to existing issues, which should help you solve them more easily.”

The adjustments

The four changes Google is making are the following:

  1. Generic issues such as “Crawl anomaly” are broken down into more specific issues, such as 4xx or 5xx errors. This makes it quicker to see where an error occurs. Do you want to know where all these errors occur? Then Screaming Frog is a handy tool to track them down (a small do-it-yourself sketch follows below the list).
  2. A new issue called “Indexed without content” is introduced.
  3. Soft 404 errors are reported more accurately.
  4. Submitted pages that are blocked by robots.txt are now displayed as an error instead of a warning.
Coverage report
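
If you want to do a rough first check yourself before firing up a full crawler, a minimal sketch along these lines can report the status codes of a handful of URLs. This is only an illustration, not part of Google's announcement: the URLs are placeholders and only Python's standard library is assumed.

import urllib.request
import urllib.error

# Hypothetical URLs; replace with pages from your own site.
urls = [
    "https://www.example.com/",
    "https://www.example.com/old-page",
]

for url in urls:
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            # Successful (2xx) responses, after any redirects, end up here.
            print(url, response.status)
    except urllib.error.HTTPError as e:
        # 4xx and 5xx responses raise HTTPError; e.code is the status code.
        print(url, e.code)
    except urllib.error.URLError as e:
        # DNS failures, timeouts and similar problems end up here.
        print(url, "request failed:", e.reason)

For more than a handful of URLs, a dedicated crawler such as Screaming Frog remains the more practical option.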

Why does this matter?

For SEO specialists this is interesting, as the above changes give more insight into how Google looks at your website. In addition, these issues can be resolved more easily because the problem is immediately visible.

The new issue “Indexed without content” in particular provides more insight into whether Google reads your page completely. This is what Google itself says about the new issue on the Search Console help page:

“This page appears in the Google index, but for some reason Google could not read the content. Possible reasons are that the page might be cloaked to Google or the page might be in a format that Google can’t index. This is not a case of robots.txt blocking.”

In short, your page may contain a problem, other than a robots.txt block, that prevents Google from reading it. This could be an empty page that you accidentally published, or a script that Google cannot read.
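
As a rough way to spot such pages yourself, a minimal sketch like the one below fetches a page and checks whether it contains any readable text outside of script and style tags. The URL is a placeholder and only Python's standard library is assumed; this is not how Google itself evaluates pages.

import urllib.request
from html.parser import HTMLParser

class TextCollector(HTMLParser):
    """Collects the visible text of a page, ignoring script and style blocks."""
    def __init__(self):
        super().__init__()
        self.in_ignored = False
        self.text = []
    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.in_ignored = True
    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self.in_ignored = False
    def handle_data(self, data):
        if not self.in_ignored:
            self.text.append(data.strip())

url = "https://www.example.com/possibly-empty-page"  # hypothetical URL
with urllib.request.urlopen(url, timeout=10) as response:
    html = response.read().decode("utf-8", errors="replace")

parser = TextCollector()
parser.feed(html)
visible_text = " ".join(t for t in parser.text if t)
if not visible_text:
    print("No readable text found; the page may be empty or script-only.")
else:
    print(f"Found {len(visible_text)} characters of readable text.")

A page that reports no readable text here is a good candidate to check against the “Indexed without content” issue in the coverage report.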