August 10, 2015 in Marketing

Blocked CSS and JS

I recently received an email from a client that included a message they had received from Google, with an ominous warning in the subject line that “Googlebot cannot access CSS and JS files” on the client’s website.  The body of the email warned that “Googlebot cannot access your JavaScript and/or CSS files because of restrictions in your robots.txt file.”  So just what does that mean, my client wondered.

The message comes from Google Search Console (formerly Google Webmaster Tools), and after doing some more digging, I found that this is a new push by Google to inform website owners that blocking these important files can hinder Google’s ranking of your site.

OK, so let’s cut through the tech jargon.  What does this mean?  In short, CSS and JS files are files on your server that make sure your website displays as intended and functions properly.  Since these files are basically computer code, they don’t mean a whole lot to the average person, and many web developers add a rule to their site (specifically, in a file called robots.txt) telling search engines to ignore the CSS and JS files, so those files don’t show up in search results.
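To make that concrete, here is what a blocking rule might look like in a robots.txt file (the directory names here are hypothetical examples, not from any particular site):

```
User-agent: *
Disallow: /scripts/
Disallow: /styles/
```

A rule like this tells every crawler (the * user agent) to stay out of the /scripts/ and /styles/ directories, which is exactly the kind of restriction Google’s warning is about.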

Since we’re telling search engines to ignore these files, when Google scans your homepage it sees that it needs to load the JS and CSS files in order to display the page properly, but it also sees that it is being told to ignore those very files.  That leaves Google with an incomplete picture of what your site looks like.  It can still read and index the content on your site, which is the most important thing, but Google now wants a more complete view of your site, including how things are displayed on the page.  For that to happen, it needs permission to view those CSS and JS files.

So, if you’re ready to unblock these files, or want to see if you currently are:

  1. Make sure you have your site set up with Google Search Console
  2. Within Search Console, on the left click on Crawl -> Fetch as Google
  3. Leave the URL space blank to fetch your homepage, and click on Fetch and Render
  4. After the fetch is complete, click on the results to see the rendering. This will show you how Google sees your page vs. how a visitor to your site sees it.  Below the rendering, you’ll see a list of any blocked items.
  5. If your CSS and/or JS files are being blocked, you’ll need to edit the robots.txt file. (You will probably want to get a developer to do this.)  Here’s what you’ll need to add to your robots.txt file:
    • User-agent: Googlebot
      Allow: /*.js
      Allow: /*.css

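If you want to sanity-check a robots.txt change before deploying it, Google’s documented precedence rule (the longest matching pattern wins, and in a tie the less restrictive Allow wins) can be sketched in a few lines of Python.  This is a simplified illustration, not Google’s actual parser, and the rules and paths below are made-up examples:

```python
import re

def matches(pattern, path):
    """Robots.txt pattern match: '*' is a wildcard, '$' anchors the end."""
    regex = "".join(
        ".*" if ch == "*" else "$" if ch == "$" else re.escape(ch)
        for ch in pattern
    )
    return re.match(regex, path) is not None

def googlebot_allows(rules, path):
    """rules is a list of (directive, pattern) pairs from robots.txt.
    The longest matching pattern wins; on a tie, Allow beats Disallow."""
    winner = ("allow", "")  # no matching rule at all means the URL is allowed
    for directive, pattern in rules:
        if matches(pattern, path):
            if len(pattern) > len(winner[1]) or (
                len(pattern) == len(winner[1]) and directive == "allow"
            ):
                winner = (directive, pattern)
    return winner[0] == "allow"

# Hypothetical rules: a blanket Disallow plus the Allow wildcards from step 5.
rules = [
    ("disallow", "/private/"),
    ("allow", "/*.js"),
    ("allow", "/*.css"),
]

print(googlebot_allows(rules, "/assets/style.css"))     # True: Allow matches
print(googlebot_allows(rules, "/private/report.html"))  # False: Disallow matches
```

One caveat this illustrates: a long, specific Disallow (like /private/ above) still outweighs a shorter Allow wildcard like /*.css for files inside that directory, so after editing your robots.txt it’s worth re-running Fetch and Render to confirm nothing is still blocked.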
That should solve the issue, and Google will now be able to fully render and understand the appearance and structure of your website.