SEO, in its most basic sense, relies on one thing above all others: search engine spiders crawling and indexing your site.
However, almost every website has pages that you do not want included in this exploration.
In a best-case scenario, these pages do nothing to actively drive traffic to your site, and in a worst-case, they could be diverting traffic away from more important pages.
Thankfully, Google allows webmasters to tell search engine bots what pages and content to crawl and what to ignore. There are several ways to do this, the most common being a robots.txt file or the meta robots tag.
We have an excellent and detailed explanation of the ins and outs of robots.txt, which you should definitely read.
But in high-level terms, it’s a plain text file that lives in your website’s root and follows the Robots Exclusion Protocol (REP).
Robots.txt provides crawlers with instructions about the site as a whole, while meta robots tags contain instructions for specific pages.
Some meta robots tags you might use include: index, which tells search engines to add the page to their index; noindex, which tells them not to add a page to the index or include it in search results; follow, which instructs a search engine to follow the links on a page; nofollow, which tells it not to follow links; and a whole host of others.
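For illustration, a meta robots tag combining two of these directives would sit in a page’s head section and might look like this (a generic example, not from any particular site):

```
<meta name="robots" content="noindex, nofollow">
```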
Both robots.txt and meta robots tags are useful tools to keep in your toolbox, but there’s also another way to instruct search engine bots to noindex or nofollow: the X-Robots-Tag.
What Is The X-Robots-Tag?
The X-Robots-Tag is another way for you to control how your webpages are crawled and indexed by spiders. Sent as part of the HTTP header response for a URL, it controls indexing for an entire page, as well as for specific elements on that page.
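For instance, a response carrying an X-Robots-Tag might look like this (a schematic example, not output from any real server):

```
HTTP/1.1 200 OK
Content-Type: text/html; charset=UTF-8
X-Robots-Tag: noindex, nofollow
```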
And whereas using meta robots tags is fairly straightforward, the X-Robots-Tag is a bit more complicated.
But this, of course, raises the question:
When Should You Use The X-Robots-Tag?
According to Google, “Any directive that can be used in a robots meta tag can also be specified as an X-Robots-Tag.”
While you can set robots meta tag directives in a page’s HTML and X-Robots-Tag directives in the headers of an HTTP response, there are specific scenarios where you would want to use the X-Robots-Tag, the two most common being when:
- You want to control how your non-HTML files are being crawled and indexed.
- You want to serve directives site-wide rather than on a page level.
For example, if you want to block a specific image or video from being crawled, the HTTP response method makes this easy.
The X-Robots-Tag header is also useful because it allows you to combine multiple tags within an HTTP response, or to specify directives as a comma-separated list.
Perhaps you don’t want a certain page to be cached, and you also want it to be unavailable after a specific date. You can use a combination of the “noarchive” and “unavailable_after” directives to instruct search engine bots to follow these instructions.
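Such a combined header could look like this (the date shown is an arbitrary example):

```
X-Robots-Tag: noarchive, unavailable_after: 25 Jun 2023 15:00:00 PST
```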
Essentially, the power of the X-Robots-Tag is that it is much more flexible than the meta robots tag.
The advantage of using the X-Robots-Tag with HTTP responses is that it allows you to use regular expressions to apply crawl directives to non-HTML content, as well as to apply directives on a larger, global level.
To help you understand the difference between these directives, it’s useful to categorize them by type. That is, are they crawler directives or indexer directives?
Here’s a handy cheat sheet:

Crawler directives:

- Robots.txt – uses the user-agent, allow, disallow, and sitemap directives to specify where on-site search engine bots are allowed and not allowed to crawl.

Indexer directives:

- Meta robots tag – allows you to specify and prevent search engines from showing particular pages of a site in search results.
- Nofollow – allows you to specify links that should not pass on authority or PageRank.
- X-Robots-Tag – allows you to control how specified file types are indexed.
Where Do You Put The X-Robots-Tag?
Let’s say you want to block specific file types. An ideal approach would be to add the X-Robots-Tag to the Apache configuration or a .htaccess file, which is how the tag gets added to a site’s HTTP responses on an Apache server.
Real-World Examples And Uses Of The X-Robots-Tag
So that sounds great in theory, but what does it look like in the real world? Let’s take a look.
Let’s say we wanted search engines not to index .pdf file types. This configuration on Apache servers would look something like the below:
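Here is a minimal sketch of what that could look like in a .htaccess file (assuming the Apache mod_headers module is enabled):

```
<Files ~ "\.pdf$">
  Header set X-Robots-Tag "noindex, nofollow"
</Files>
```

This matches every file ending in .pdf and attaches the header to its HTTP response.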
In Nginx, it would look like the below:

```
location ~* \.pdf$ {
  add_header X-Robots-Tag "noindex, nofollow";
}
```
Now, let’s look at a different scenario. Let’s say we want to use the X-Robots-Tag to block image files, such as .jpg, .gif, .png, etc., from being indexed. You could do this with an X-Robots-Tag that would look like the below:
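A sketch of such a rule in Apache .htaccess form (again assuming mod_headers is enabled; the extension list is illustrative, not exhaustive):

```
<Files ~ "\.(png|jpe?g|gif)$">
  Header set X-Robots-Tag "noindex"
</Files>
```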
Please keep in mind that understanding how these directives work, and the impact they have on one another, is crucial.
For example, what happens if both the X-Robots-Tag and a meta robots tag are present when crawler bots discover a URL?
If that URL is blocked by robots.txt, then these indexing and serving directives cannot be discovered and will not be followed.
So, if directives are to be followed, the URLs containing them cannot be disallowed from crawling.
Check For An X-Robots-Tag
There are a few different methods that can be used to check for an X-Robots-Tag on a site.
The easiest way to check is to install a browser extension that will show you X-Robots-Tag information about the URL.
Screenshot of Robots Exclusion Checker, December 2022
Another plugin you can use to determine whether an X-Robots-Tag is being used is the Web Developer plugin.
By clicking on the plugin in your browser and navigating to “View Response Headers,” you can see the various HTTP headers being used.
Another method that can be used to scale, in order to pinpoint issues on sites with a million pages, is Screaming Frog.
After running a site through Screaming Frog, you can navigate to the “X-Robots-Tag” column.
This will show you which sections of the site are using the tag, along with which specific directives.
Screenshot of Screaming Frog Report, X-Robots-Tag, December 2022

Using X-Robots-Tags On Your Site

Understanding and controlling how search engines interact with your website is the foundation of search engine optimization. And the X-Robots-Tag is a powerful tool you can use to do just that.

Just be aware: it’s not without its risks. It is very easy to make a mistake and deindex your entire site.

That said, if you’re reading this piece, you’re probably not an SEO beginner. So long as you use it wisely, take your time, and check your work, you’ll find the X-Robots-Tag to be a useful addition to your arsenal.

Featured Image: Song_about_summer