🤖 robots.txt Generator

Build a robots.txt file visually. Configure allow/disallow rules per user-agent, set a crawl delay, and add a sitemap URL. Free online robots.txt generator for SEO.


How to Use

1. Choose a preset

Start with a preset: Allow All, Block All, Block AI Bots (GPTBot, ClaudeBot, etc.), or SEO-Friendly.

2. Customize the rules

Add or remove User-agent rules and Disallow/Allow paths using the rule builder below the presets.

3. Download your file

Add your sitemap URL (optional), then click Download to save your robots.txt file.
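
For example, a generated file for a typical SEO-friendly setup might look like this (a sketch; the blocked path and sitemap URL are placeholders):

    User-agent: *
    Disallow: /admin/
    Crawl-delay: 10

    Sitemap: https://example.com/sitemap.xml

Note that Crawl-delay is honored by some crawlers (e.g., Bingbot) but ignored by Google.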

Frequently Asked Questions

What is a robots.txt file?
robots.txt is a file placed at the root of your website (example.com/robots.txt) that tells web crawlers which pages they can or cannot request. It follows the Robots Exclusion Protocol and is the first thing most crawlers fetch.
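Note that the file applies per host: the rules for https://example.com must live at https://example.com/robots.txt, and a subdomain such as https://blog.example.com needs its own file.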
Does robots.txt prevent pages from being indexed?
No — robots.txt prevents crawling, not indexing. If other pages link to a disallowed URL, Google can still index it without crawling it. To prevent indexing, use the noindex meta tag or X-Robots-Tag header.
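For example, either standard mechanism works (the header form is useful for non-HTML resources such as PDFs):

    <meta name="robots" content="noindex">

    X-Robots-Tag: noindex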
How do I block AI training bots?
Use the "Block AI Bots" preset. Common AI crawlers include GPTBot (OpenAI), CCBot (Common Crawl), Google-Extended (Google AI training), anthropic-ai (Anthropic), and ChatGPT-User. Add a User-agent line for each bot, followed by Disallow: /.
What does "Disallow: /" mean?
Disallow: / blocks all pages on the site for that user-agent. Disallow: /admin/ blocks only the /admin/ directory. Disallow: (empty) means allow everything. Allow: /public/ within a blocked section creates an exception.
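For example, to block an entire site except one directory:

    User-agent: *
    Disallow: /
    Allow: /public/

Crawlers that support Allow (including Googlebot and Bingbot) apply the most specific matching rule, so /public/ stays crawlable.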
Is robots.txt case-sensitive?
Yes, for paths: crawlers match Disallow and Allow rules case-sensitively, so Disallow: /Admin/ and Disallow: /admin/ are different rules. User-agent names, by contrast, are case-insensitive.


Complete Guide: Robots.txt Generator

What is a Robots.txt Generator?

A Robots.txt Generator creates a proper robots.txt file for your website to guide search engine crawlers. robots.txt implements the Robots Exclusion Protocol: it specifies which pages or directories may be crawled and which may not.

It is important to block admin panels, private user data, duplicate content, staging URLs, and API endpoints from crawling. This benefits both privacy and crawl budget.
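
For example, a rule set covering those cases might look like this (the paths are illustrative):

    User-agent: *
    Disallow: /admin/
    Disallow: /staging/
    Disallow: /api/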

How to Use

  1. Select user-agents (Googlebot, Bingbot, or all).
  2. Specify allowed and disallowed paths.
  3. Add a sitemap URL.
  4. Copy the generated file and upload it to your website root.
