Robots.txt: Stop Search Engine Bots from Crawling Parts of Your Blog


Using a robots.txt file, you can give web robots instructions about your site.

When a search engine bot comes to your blog, it has limited resources to crawl. After a certain amount of crawling it stops, and some of your pages may not get indexed. There may also be folders or pages you do not want crawled, for example an images folder. Using robots.txt, you can tell search engines not to crawl that particular folder, which frees up crawl budget for deeper crawling of your inner pages.

The robots.txt file is placed at the root of your domain, e.g. example.com/robots.txt.

For example, the following rules tell all bots not to crawl any page on the site:

User-agent: *
Disallow: /
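If the goal is only to keep bots out of one folder, such as the images folder mentioned above, a narrower rule does the job (the folder name /images/ here is just an illustration; use your own path):

```
# Applies to all bots
User-agent: *
# Block only the images folder; everything else stays crawlable
Disallow: /images/
```

Anything not disallowed is crawlable by default, so no extra rules are needed for the rest of the site.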

Robots.txt basically prevents crawling of certain folders of your domain, but it does not prevent indexing: a disallowed URL can still appear in search results if other pages link to it.

– To stop crawling of part of your blog, use the robots.txt file.
– Do not use the robots.txt file for noindexing; use the noindex meta tag instead.
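To keep a page out of the index, the standard approach is a robots meta tag in the page's head section, roughly like this:

```html
<!-- Tells search engines not to index this page;
     links on the page may still be followed -->
<meta name="robots" content="noindex, follow">
```

Note that the page must not be blocked in robots.txt for this to work: a bot that is forbidden from crawling the page never sees the tag.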
