11 Feb '16

Nutritionists advise that we should start our day with a good breakfast – they are wrong. You should start your day by crawling your site with Screaming Frog, both to gather data that will serve your organic strategy and for the simple peace of mind that your site is still running as it should. Screaming Frog is a powerful tool with a lot of functionality, but it can be a little intimidating at first. This free tool (a more powerful paid version is also available) crawls your site the way search engines do and returns critical data that will help you better understand how your site works. To help you get to grips with the tool – and understand why crawling your site is a good idea – here is an exhaustive breakdown that will make you a better webmaster and SEO.

First things first: let's learn how to actually crawl a site using the tool, and then break into more advanced uses.

How to crawl the whole site:

To ensure that Screaming Frog crawls your entire site, you must change the tool's default settings in the Spider Configuration menu. Check the 'Crawl All Subdomains' option so that the spider follows every link it comes across on your site. You can also uncheck Images, CSS, JavaScript, SWF, and External Links to make the crawl run faster. Once you've changed these options, simply enter your domain in the top field and let Screaming Frog do the work. It will pull lots of great information on all the indexable pages of your site – meta titles, descriptions, headlines, page length – all in a format that's easy to sort and manipulate. When we start working on a new client's website, the first thing we do is run their domain through this step and analyze the data.

Crawling a single subdirectory:

Screaming Frog also allows you to restrict your crawl to a single folder: enter the specific URL you wish to start crawling from and leave the default settings unchanged.

Excluding subdomains or subdirectories:

Screaming Frog also offers options for restricting crawls to particular sets of subdomains or subdirectories, using RegEx patterns in the Exclude list of the configuration menu to leave out specific areas of the site. For example, you could exclude the folder containing your portfolio ("Our Work") with a RegEx pattern. On very large sites, this kind of segmentation helps you isolate the performance of specific areas of the site.

Exporting lists of data from Screaming Frog:

Screaming Frog lets you dive into particular types of data at a granular level. Crawl data can be filtered by type – HTML, CSS, Images, JavaScript, PDF, Flash, and more – and exported directly.

These basic steps can provide more information than you'll know what to do with.
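If you want to sanity-check an exclude pattern before running a long crawl, you can test it the same way Screaming Frog applies it: the RegEx must match the full URL for that URL to be excluded. Here is a minimal sketch in Python, using a hypothetical domain and an assumed pattern for an "Our Work" folder – substitute your own URLs and pattern.

```python
import re

# Hypothetical exclude pattern for the "Our Work" portfolio folder.
# Screaming Frog's Exclude list expects the regex to match the whole URL.
exclude_pattern = re.compile(r"https://www\.example\.com/our-work/.*")

# Sample URLs to check against the pattern (assumed for illustration).
urls = [
    "https://www.example.com/our-work/case-study/",
    "https://www.example.com/blog/seo-tips/",
]

for url in urls:
    excluded = bool(exclude_pattern.fullmatch(url))
    print(url, "-> excluded" if excluded else "-> crawled")
```

Testing the pattern this way takes seconds; discovering halfway through a crawl that your regex excluded nothing (or everything) can cost hours on a large site.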
Here are some helpful first steps with the data you’ve collected:

  • Ensure that every page on your site has a proper title and description.
  • Find pages with fewer than 250 words and beef up their on-page content.
  • Identify any 404 errors that may have gone unnoticed.
  • Analyze your site structure to make sure the majority of pages are fewer than three clicks from the homepage.
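The first three checks above can be run directly against an exported crawl. Below is a minimal sketch in Python that reads a CSV export and flags pages with missing titles or descriptions, thin content, and 404s. The column names ("Address", "Status Code", "Title 1", "Meta Description 1", "Word Count") are an assumption based on typical Screaming Frog exports – verify them against the headers in your own export, as they vary by version.

```python
import csv

def audit_crawl(path):
    """Scan an exported crawl CSV and bucket URLs by common SEO issues."""
    missing_meta, thin_pages, not_found = [], [], []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            url = row["Address"]
            if row.get("Status Code") == "404":
                not_found.append(url)       # broken page
                continue
            if not row.get("Title 1") or not row.get("Meta Description 1"):
                missing_meta.append(url)    # title or description missing
            if int(row.get("Word Count") or 0) < 250:
                thin_pages.append(url)      # thin on-page content
    return missing_meta, thin_pages, not_found
```

From there, each list can be pasted straight into a task tracker or handed to a content team as a prioritized to-do.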

Good luck! Once you’ve mastered the basics, read our follow-up post, Advanced Screaming Frog capabilities.
