All the ways to exclude a page in Google

Both meta tags and the X-Robots-Tag HTTP header play a crucial role in determining how search engines interact with a website's content. At first glance the two tools seem similar, but they differ in application, flexibility and scope.

Understanding the differences between the two tools contributes to a more effective SEO strategy, especially in terms of managing a website’s visibility in search engines.

Excluding parts of a website

There are several options for blocking specific areas of a Web site from search engines. Which option to use depends on the specific needs and nature of the content you want to hide.

Robots.txt file

The robots.txt file is placed in the root directory of a website. It instructs search engines on which parts of the site they may or may not crawl.

While it is a powerful tool, it also has limitations. It does not guarantee that excluded content will stay out of the index: the exclusion is a request to crawlers, not a binding prohibition. The robots.txt file is especially useful for excluding large sections of a website or certain file types.
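As an illustration, a robots.txt file that blocks crawling of an internal search directory and all PDF files might look like this (the paths are hypothetical examples, not a recommendation for every site):

```
# robots.txt in the site root, e.g. https://example.com/robots.txt
User-agent: *
Disallow: /internal-search/
Disallow: /*.pdf$
```

Note that this only restricts crawling; it does not by itself keep the URLs out of the index if they are linked from elsewhere.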

Robots meta tag

The robots meta tag gives more granular control at the page level. Place the tag in the <head> section of the HTML and specifically indicate whether a page should be indexed or its links followed. This tool is especially useful for pages with temporary promotions or internal search results that should not appear in search engine results.
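A minimal HTML page carrying the tag could look like this (the page itself is a made-up example):

```html
<!DOCTYPE html>
<html>
<head>
  <!-- Keep this page out of the index, but allow its links to be followed -->
  <meta name="robots" content="noindex, follow">
  <title>Temporary promotion</title>
</head>
<body>...</body>
</html>
```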

X-Robots-Tag HTTP header

The X-Robots-Tag HTTP header is similar to the robots meta tag, but works at the server level. Unlike the meta tag, it can also be applied to non-HTML files such as PDFs or images. The tool is especially useful for technical, server-side control and for instructions that go beyond what is possible within HTML.
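On an Apache server with mod_headers enabled, the header could for instance be attached to all PDF files like this (a sketch, not a definitive configuration):

```
# Apache: send X-Robots-Tag for every PDF served
<FilesMatch "\.pdf$">
  Header set X-Robots-Tag "noindex, nofollow"
</FilesMatch>
```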

Request removal through the Google Search Console

If pages need to be removed from Google's index faster than the normal crawl process allows, it is possible to submit a removal request through Google Search Console. This quickly hides the page from search results, but it does not replace a permanent method such as a noindex tag.

Practical guide to using noindex

Noindex is an important part of the SEO toolkit, provided it is used carefully and strategically.

The impact of noindex on the visibility of a page

The noindex tag explicitly tells search engines not to include a page in the index, so the page will not appear in search results. Typical candidates are temporary content, duplicate pages and private content.

However, a noindex tag does not prevent a page from being crawled, nor does it stop the links on the page from being followed. The latter requires the “nofollow” directive.

The implementation of noindex

  1. Choose the right pages: identify which pages you don’t want in search results, such as duplicate pages, private pages or pages with temporary or thin content.
  2. Add the noindex tag: put the <meta name="robots" content="noindex"> tag in the <head> section of the HTML of the pages in question.
  3. Verify implementation: use a tool like Google Search Console to verify that the tag has been implemented correctly and that search engines recognize it.
  4. Monitor impact: keep an eye on the index status of the pages. It can take a while for search engines to respond to a noindex tag, so check regularly.
  5. Update as needed: remove the noindex tag when a page should become visible again.
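Step 3 can be partly automated. The sketch below (a minimal example, not a full audit tool) uses Python's standard library to check a fetched HTML document and its response headers for a noindex directive; the example page and header values are made up:

```python
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collects the directives of any <meta name="robots"> tag."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name", "").lower() == "robots":
            self.directives.extend(
                d.strip().lower() for d in attrs.get("content", "").split(",")
            )

def is_noindexed(html, headers=None):
    """True if the page is noindexed via meta tag or X-Robots-Tag header."""
    parser = RobotsMetaParser()
    parser.feed(html)
    if "noindex" in parser.directives:
        return True
    header_value = (headers or {}).get("X-Robots-Tag", "")
    return "noindex" in header_value.lower()

# Example with a made-up page and header:
page = '<html><head><meta name="robots" content="noindex, follow"></head></html>'
print(is_noindexed(page))                                          # True
print(is_noindexed("<html></html>", {"X-Robots-Tag": "noindex"}))  # True
print(is_noindexed("<html></html>"))                               # False
```

In practice the HTML and headers would come from an HTTP request to the page being checked; only the parsing logic is shown here.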

Deleting pages using Google’s URL Removal Tool

In some cases, excluding a page from search engines is not enough – for example, when sensitive information must be removed, or when an accidentally indexed page needs to disappear from search results as quickly as possible. In such cases, use Google’s URL Removal Tool, which can temporarily remove URLs from Google’s search results.

Please note that this is only a temporary solution. For permanent removal, a noindex tag or actually deleting the content from the site is still necessary.

URL Removal Tool for quick action

The URL Removal Tool is ideal for quick action. Access it through Google Search Console and enter the URL that should be removed from the search results. The removal lasts about six months.

After those six months, the page may reappear in search results. Prevent this by, for example, adding a noindex tag or permanently deleting the page.

Long-term or permanent removal

For long-term or permanent removal of a page from search results, the URL Removal Tool is not enough. Remove the content yourself or add a noindex tag.

Make sure the server returns status code 404 (not found) or 410 (gone). These codes tell search engines that the page no longer exists, so it can be removed from the index over time.
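On an Apache server, a deleted page can be made to return a 410 with mod_alias, for instance like this (a sketch; the path is hypothetical):

```
# Apache: tell crawlers this URL is permanently gone
Redirect gone /old-promotion/
```

On nginx the equivalent would be a location block with return 410;.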

All possibilities at a glance

The table below sets out the different options for both meta tags and the X-Robots-Tag.

| Possibility | Meta tags | X-Robots-Tag |
| --- | --- | --- |
| Location | In the <head> section of an HTML page. | In the HTTP response header, server-side. |
| Scope | Only the specific page on which they are placed. | Any type of HTTP response, including non-HTML files. |
| Flexibility | Must be added manually to each page. | More flexible; can be applied server-wide. |
| Use for HTML pages | Instructions for indexing and following links. | Same capabilities as meta tags, but server-side. |
| Use for other files | Not applicable. | Can be used for images, PDFs and other media. |
| Complexity of instructions | Limited to basic instructions per page. | Can handle more complex instructions and conditions. |
| Example | <meta name="robots" content="noindex, nofollow"> | Header set X-Robots-Tag "noindex, noarchive, nosnippet" |

Options to exclude a page.

This table shows that the X-Robots-Tag offers more flexibility and broader application possibilities, especially for non-HTML content and more complex scenarios.

Common mistakes

Avoid common mistakes when excluding pages from indexing. Incorrect use of robots.txt, the X-Robots-Tag and meta tags can produce the opposite result and hurt a site’s position in the search results.

Pitfalls of robots.txt

It is often assumed that blocking a page in robots.txt means that a page will not be indexed. This is a common mistake. Robots.txt prevents the search engines from crawling a page’s content, but the page can still appear in the index if it is linked elsewhere.

Indexing can be prevented by using noindex in a robots meta tag or X-Robots-Tag. Note that search engines must be able to crawl the page to see these instructions, so do not block the same page in robots.txt at the same time.

Misunderstandings about meta tags and X-Robots-Tag

There are also recurring misunderstandings in the use of meta tags and the X-Robots-Tag. It is important to understand that these tags provide instructions to search engines on indexing and link following.

In the case of misconfiguration, unwanted indexing can occur or pages that should be indexed are excluded. Always test the implementation beforehand to avoid SEO problems.

The differences between meta tags and the X-Robots-Tag

Meta tags and the X-Robots-Tag are both used to instruct search engines on how to treat certain content on a website. Their functions are similar, but the tools differ in application and flexibility.

  1. Meta tags:
    • Location: Meta tags are placed directly in the HTML of an individual Web page, usually in the <head> section.
    • Scope: Meta tags apply only to the specific page on which they are placed.
    • Flexibility: Meta tags have limited flexibility because they must be manually applied to all desired pages.
    • Use: Meta tags indicate, among other things, how search engines should index a page (e.g., with noindex, nofollow).
    • Example: <meta name="robots" content="noindex, nofollow">
  2. X-Robots-Tag:
    • Location: X-Robots-Tag is an HTTP header and is sent in the server’s HTTP response.
    • Scope: X-Robots-Tag can be applied to any type of HTTP response. This applies not only to HTML pages, but also to media such as images or PDF files.
    • Flexibility: X-Robots-Tag is more flexible and powerful than meta tags, especially when it comes to managing crawl instructions for non-HTML files.
    • Usage: X-Robots-Tag supports more complex instructions – think of combining different directives for different search engines, or applying rules based on certain criteria.
    • Example: in a server configuration it is possible to add a rule such as Header set X-Robots-Tag "noindex, noarchive, nosnippet".

Meta tags are thus limited to page-level instructions within the HTML code, while the X-Robots-Tag provides a more versatile and powerful way to manage crawl instructions, applicable to a wide range of content types through server configuration.
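The server-configuration example above uses Apache's mod_headers; on nginx the same header can be sent with add_header (a sketch, assuming PDFs should be excluded):

```
# nginx: send X-Robots-Tag for all PDF responses
location ~* \.pdf$ {
    add_header X-Robots-Tag "noindex, nofollow" always;
}
```

The always flag makes nginx send the header on all response codes, not just 2xx and 3xx.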

Align exclusion strategies with SEO goals

When aligning exclusion strategies with SEO goals, it is important to know what the website needs to achieve. Consider which parts help improve SEO and which do not. Exclusion strategies aim not only to hide content but also to help search engines focus on the content that really matters. So think strategically about using tools such as robots.txt, the X-Robots-Tag and noindex tags.

Excluding content that does not contribute to better SEO – think duplicate pages or internal search results – can contribute to more relevant and visible content of higher quality.

The balance between visibility and privacy

While visibility is essential to attract traffic, not all content is intended for public display. For privacy reasons, it may be necessary to hide some parts of a Web site, including user-specific information or internal data.

What is important is finding a good balance: content that should rank in Google must remain indexable, while sensitive information must be shielded. Make sure both requirements are met.


Meta-tags and X-Robots-Tag are both essential for managing how search engines treat a Web site’s content. However, there are differences.

Meta tags are especially suited to applying basic instructions to individual HTML pages, while the X-Robots-Tag provides a more flexible and powerful solution for a wider range of content types and more complex scenarios. This helps guide a website’s visibility and indexing more accurately and supports a more targeted SEO roadmap.

Senior SEO-specialist

Ralf van Veen

My clients give me a 5.0 on Google out of 75 reviews

I have been working for 10 years as an independent SEO specialist for companies (in the Netherlands and abroad) that want to rank higher in Google in a sustainable manner. During this period I have consulted A-brands, set up large-scale international SEO campaigns and coached global development teams in the field of search engine optimization.

With this broad experience within SEO, I have developed the SEO course and helped hundreds of companies with improved findability in Google in a sustainable and transparent way. For this you can consult my portfolio, references and collaborations.

This article was originally published on 22 March 2024. The last update of this article was on 22 March 2024. The content of this page was written and approved by Ralf van Veen. Learn more about the creation of my articles in my editorial guidelines.