Dealing with crawl errors
Courtesy of Yoast.com:
Crawl errors occur when a search engine tries to reach a page on your website but fails. Let’s shed some light on crawling first. Crawling is the process by which a search engine visits every page of your website via a bot. The bot finds a link to your website and, starting from there, discovers all your public pages. It crawls each page, indexes its contents for use in Google, and adds every link on the page to the pile of pages it still has to crawl. Your main goal as a website owner is to make sure the search engine bot can reach every page on the site. When this process fails, you get what we call crawl errors.
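The process above can be sketched in a few lines of Python. This is a minimal illustration, not how Googlebot actually works: a breadth-first loop that fetches a page, queues every link it finds, and records any page it cannot reach as a crawl error. The `example.com` pages and the `fetch` callback are made up for the demo.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collect the href of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_url, fetch):
    """Breadth-first crawl: visit a page, then queue every new link on it.

    `fetch(url)` returns the page HTML, or None if the page is unreachable —
    each None is exactly what the article calls a crawl error.
    """
    seen = {start_url}
    queue = [start_url]
    errors = []
    while queue:
        url = queue.pop(0)
        html = fetch(url)
        if html is None:                      # bot failed to reach the page
            errors.append(url)
            continue
        parser = LinkExtractor()
        parser.feed(html)
        for href in parser.links:
            absolute = urljoin(url, href)     # resolve relative links
            if absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)
    return seen, errors

# Hypothetical two-page site; /broken is linked but has no content,
# so fetching it fails and it shows up as a crawl error.
site = {
    "https://example.com/": '<a href="/about">About</a> <a href="/broken">Broken</a>',
    "https://example.com/about": '<a href="/">Home</a>',
}
visited, errors = crawl("https://example.com/", site.get)
print(sorted(visited))
print(errors)  # pages the "bot" could not reach
```

Note how one dead link is enough to leave a page unvisited and unindexed, which is why these errors matter even when the rest of the site is healthy.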
Crawl errors can range from small issues to major problems. To become aware of any issues on a website you manage, it is crucial to set up Google Search Console (formerly Google Webmaster Tools). It’s a free tool that helps with more than just these issues; it can even detect malware to a certain degree. You’ll get email alerts notifying you of any problems that have been detected. You may have double- and triple-checked your website pages, but errors can occur at any time. I still get emails from Search Console, mostly warning me of index coverage issues.
These problems can be triggered by misconfigurations in your DNS records, your robots.txt file, or your web server (an Apache 500 error, for example). I highly recommend using a monitoring service to help detect issues with your websites. Check out my review of StatusCake, which offers unlimited website monitoring for free.
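If you want a rough feel for what a monitor (or a crawler) sees, here is a minimal sketch using only Python's standard library. The `classify` buckets are my own simplification of how HTTP responses map onto the failure causes above; the `check` helper simply fetches a URL and returns the status it got, or `None` when no response came back at all (a DNS or network problem).

```python
import urllib.request
import urllib.error

def classify(status):
    """Rough triage of an HTTP status the way a crawler would see it."""
    if status is None:
        return "connection failed (DNS, TLS, or network problem)"
    if 200 <= status < 300:
        return "ok"
    if 300 <= status < 400:
        return "redirect (fine as long as it does not loop)"
    if 400 <= status < 500:
        return "client error (broken link or blocked page)"
    return "server error (misconfiguration, e.g. a 500 from Apache)"

def check(url, timeout=10):
    """Fetch a URL; return its HTTP status, or None if we got no response."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status
    except urllib.error.HTTPError as e:
        return e.code          # server answered, but with an error status
    except urllib.error.URLError:
        return None            # DNS failure, refused connection, timeout, etc.

for status in (200, 301, 404, 500, None):
    print(status, "->", classify(status))
```

A real monitoring service adds scheduling, alerting, and checks from multiple locations, which is why I still recommend one over a home-grown script.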
In closing, don’t ignore these errors. Even if there isn’t a problem rendering the page, it is still vital that Google can access your website and pages quickly. You might be doing a great job at internal linking only to have Googlebot stop dead in its tracks due to a crawl error.