Robots Meta Tag: Basics, Examples & SEO Best Practices




Updated 5/2/2024

The robots meta tag is an essential SEO tool and concept. It helps you manage how search engines and web crawlers treat your site.


What is a Meta Robots Tag?

The robots meta tag is an HTML tag that instructs search engines on how to index a page. It provides guidelines on what search engines should or shouldn’t do with the content.

For example, many websites keep members-only content out of Google’s index, so searchers can’t easily access it for free. Robots meta tags are a great tool for controlling how you want your site and its content displayed to searchers.

Example Tag:

<meta name="robots" content="noindex">

This standard tag tells search engines like Google not to index the page, keeping it out of search results.
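The tag belongs in the <head> section of the page's HTML. A minimal, illustrative placement (the page title is just an example):

```html
<!DOCTYPE html>
<html>
  <head>
    <title>Members-Only Area</title>
    <!-- Keep this page out of search engine indexes -->
    <meta name="robots" content="noindex">
  </head>
  <body>
    ...
  </body>
</html>
```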


Why Use Robots Meta Tags?

Search engines like Google use web crawlers to index content. The process can be resource-intensive for both the search engine and the website. Giving specific instructions to these crawlers provides several benefits:

  1. Control Your Content: Decide what parts of your website get indexed and appear in search results. If you don’t want searchers to discover certain pages of your site, these types of tags can help.
  2. Optimize Resources: Prevent crawlers from accessing irrelevant or duplicate content, saving server resources. This improves your crawl budget, which can lead to greater site health and SEO.
  3. Manage User Experience: Ensure that users find the content they’re looking for when they search for related terms. If searchers are landing on irrelevant pages from your site, robots meta tags can improve the experience by removing those pages from search results.

Knowing how to use robots meta tags and their effects on search results is a core SEO skill. Make sure you understand the effect of each directive before you deploy it.

Example Uses

The tag can be implemented in various ways depending on your specific needs and the desired outcome. Here’s a closer look at some common implementations:

1. Prevent Indexing of a Page

If you don’t want search engines to index a particular page, you’d use:

<meta name="robots" content="noindex">

This directive ensures the page won’t appear in search engine results, useful for private or duplicate content.

2. Prevent Following of Links on a Page

To stop search engines from following the links on a certain page:

<meta name="robots" content="nofollow">

Use this when you have a page with links you don’t want to pass ranking power to, perhaps links to external sites you don’t endorse.

3. Combining Directives

You can combine multiple directives using a comma:

<meta name="robots" content="noindex, nofollow">

This tells search engines neither to index the page nor to follow the links on it. It’s useful for pages that you want to keep entirely private from search results and ensure no link equity passes to the linked pages.

4. Prevent Caching of a Page

If you don’t want search engines to store a cached copy of your page:

<meta name="robots" content="noarchive">

Useful for content that changes frequently or when you don’t want an older version of your page to be retrievable.

5. Using Specific Search Engine Tags

While the generic robots meta tag gives directives to all search engines, you can target specific ones, like Googlebot, for more precise control:

<meta name="googlebot" content="noindex">

This version tells only Google’s crawler not to index the page, while other search engines might still index it. This granularity offers greater flexibility in certain SEO strategies.

Best Practices for Robots Meta Tags

When utilizing the tag, it’s important to avoid common pitfalls that can hamper your site’s visibility in search engines. Here are a few tips to keep in mind:

1. Be Explicit in Your Directives

Always be clear with your instructions. For instance, if you want to prevent both indexing and link-following, specify both noindex and nofollow directives:

<meta name="robots" content="noindex, nofollow">

Being explicit helps prevent ambiguity and ensures search engines interpret your directives correctly.

2. Regularly Review and Update Your Tags

It’s not uncommon for pages’ purposes to change over time. A page that was once not intended for indexing may become critical for SEO later. This is very common when moving a page from staging to production, or an article from draft to published. Regularly review and update your tags to reflect the current status of your content.
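For example, a page protected during staging needs its tag removed or relaxed at launch. The values below are illustrative; omitting the tag entirely is equivalent to the permissive defaults:

```html
<!-- Staging: keep the unfinished page out of search results -->
<meta name="robots" content="noindex, nofollow">

<!-- Production: remove the tag, or state the permissive defaults -->
<meta name="robots" content="index, follow">
```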

3. Use with Caution on Important Pages

Ensure critical pages, such as product pages or cornerstone articles, are not accidentally blocked. A small oversight can result in these pages disappearing from search results, harming your site’s traffic and revenue.

4. Avoid Conflicting Directives

Ensure that your meta tag directives don’t conflict with directives in your robots.txt file. For instance, if a page is disallowed in robots.txt, search engines won’t see the meta tags on that page. Align the instructions between your robots meta tags and robots.txt for consistent messaging to search engines.
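For example, a robots.txt rule like the following stops crawlers from fetching the affected pages at all, so any noindex meta tag on them would never be read (the path is illustrative):

```text
# robots.txt — crawlers may not fetch anything under /private/,
# so a noindex meta tag on those pages will never be seen
User-agent: *
Disallow: /private/
```

If your goal is to keep a page out of the index, let crawlers fetch it and rely on the noindex meta tag rather than a robots.txt Disallow.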

5. Be Mindful of Case Sensitivity

While most search engines treat meta tag directives as case-insensitive, it’s good practice to use lowercase to ensure uniformity and avoid potential discrepancies:

<!-- Recommended -->
<meta name="robots" content="noindex, nofollow">

<!-- Not Recommended -->
<meta name="ROBOTS" content="NOINDEX, NOFOLLOW">

Stick to lowercase for directives to ensure they’re universally recognized.

6. Use Tags for Specific Search Engines

While the general robots tag applies to all search engines, you might want to provide specific directives for specific search engines like Google or Bing. Use tags like googlebot or bingbot to target them:

<meta name="googlebot" content="noindex">

Customizing directives for specific search engines can provide more granular control over how different crawlers interpret your pages.
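Engine-specific tags can sit alongside the generic robots tag in the same head. When both apply, Google generally honors the most restrictive directive; the values below are illustrative:

```html
<!-- General rule for all crawlers -->
<meta name="robots" content="noarchive">
<!-- Additional, Google-specific rule -->
<meta name="googlebot" content="noindex">
```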

When handled correctly, the robots meta tag is a powerful tool for guiding search engines on how to treat your site’s content. Regularly revisiting and refining your usage ensures your website remains optimized for search visibility.


Robots Meta Tag FAQs

Navigating the intricacies of robots meta tags can sometimes be confusing. Here’s a rundown of frequently asked questions to guide you through the essentials.

1. What’s the difference between robots meta tags and robots.txt?

Robots meta tags and robots.txt both serve to instruct search engines on how to crawl or index content, but they operate differently:

  • Robots Meta Tag: This is placed in the <head> section of individual HTML pages and provides instructions specific to that page.
  • Robots.txt: This is a file placed in the root directory of a website. It provides broader instructions about which parts (like directories or entire sections) of a site crawlers can or cannot access.

While robots.txt restricts access to sections of your website, robots meta tags provide finer, page-level control.

2. Which is better, robots meta tags or robots.txt?

Neither is inherently better; they serve different purposes:

  • Robots.txt: Useful for preventing search engine bots from accessing large sections or types of content on your site.
  • Robots Meta Tag: Ideal for more granular control, like on a page-by-page basis.

Determine your needs. If you’re looking to block an entire directory, robots.txt is suitable. For specific page-level control, go with the robots meta tag.

3. What does a blocked robots meta tag mean?

A blocked robots meta tag means the page has a directive instructing search engines not to index that specific page, or not to follow the links on that page, or both. It’s saying, “Please ignore this page, search engine.”

If you discover a critical page has been accidentally blocked by a robots meta tag, it’s essential to fix the tag so the page can be indexed.

4. Can I use the robots meta tag to block link equity?

Yes, by using the nofollow directive in the meta tag, you can instruct search engines not to follow the links on a page, which prevents link equity from being passed:

<meta name="robots" content="nofollow">

This is beneficial when you want to reference a page without passing SEO value to it, such as linking to a source for credibility without endorsing it with a followed link.

5. Does the robots meta tag affect my SEO?

Absolutely. Incorrect use can prevent important pages from being indexed or essential links from being followed, which can harm your site’s search performance.

Regularly review the robots meta tags on your website, especially after major updates, to ensure you’re not inadvertently blocking crucial content.

Understanding the nuances of robots meta tags can play a pivotal role in your site’s SEO performance. When used correctly, they offer precision control over how search engines interact with your content.

Bottom Line

Understanding and effectively utilizing the robots meta tag is crucial in modern web management. It offers you control over how search engines interact with your content, directly affecting visibility and user experience. By combining best practices with continuous learning, you can ensure your website remains optimized for both users and search engines.

