Robots.txt is a plain text file that tells search engines such as Google which pages of your website they may crawl and which pages they may not.

Now one question may come to your mind: how does this process work?

Step 1: If you want Google to crawl all pages of your website, use the rules below. The empty Disallow line means no path is blocked:

User-agent: *
Disallow:

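As a quick sanity check, rules like these can be tested with Python's standard urllib.robotparser module (example.com is just a placeholder domain for illustration):

```python
from urllib.robotparser import RobotFileParser

# Allow-all rules: apply to every crawler, block nothing.
rules = [
    "User-agent: *",
    "Disallow:",
]

parser = RobotFileParser()
parser.parse(rules)

# With an empty Disallow, every path is crawlable.
print(parser.can_fetch("*", "https://example.com/"))          # True
print(parser.can_fetch("*", "https://example.com/any/page"))  # True
```

The parser answers the same question a crawler would ask before fetching a URL: am I allowed here?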

Step 2: If your website is not ready yet, or part of it is under construction, you can disallow all pages with the code below. These rules tell Google not to crawl any page of your website:

User-agent: *
Disallow: /
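You can verify that this blocks everything with the same standard-library parser (again, example.com and the paths are placeholders):

```python
from urllib.robotparser import RobotFileParser

# Block-everything rules: every crawler is shut out of the whole site.
parser = RobotFileParser()
parser.parse(["User-agent: *", "Disallow: /"])

# "Disallow: /" matches every path, so nothing may be fetched.
print(parser.can_fetch("Googlebot", "https://example.com/"))       # False
print(parser.can_fetch("Googlebot", "https://example.com/about"))  # False
```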

Step 3: If you want to hide specific folders (for example scripts or temporary files) from Google, disallow each one on its own line:

User-agent: *
Disallow: /cgi-bin/
Disallow: /tmp/
Disallow: /junk/
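A quick check with Python's urllib.robotparser shows that only the listed folders are blocked while the rest of the site stays open (the domain and file names here are made up for the example):

```python
from urllib.robotparser import RobotFileParser

# Selective rules: only three folders are off limits.
parser = RobotFileParser()
parser.parse([
    "User-agent: *",
    "Disallow: /cgi-bin/",
    "Disallow: /tmp/",
    "Disallow: /junk/",
])

# Paths inside a disallowed folder are blocked...
print(parser.can_fetch("*", "https://example.com/tmp/report.html"))  # False
# ...while everything else stays crawlable.
print(parser.can_fetch("*", "https://example.com/blog/post-1"))      # True
```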
If you have any questions about robots.txt, please let me know.

About The Massive Technolab

Who We Are

Massive Technolab is a leading IT services company in India with expertise in web development, web design, and SEO at affordable rates.
