Robots.txt + Sitemap Validator

Validate your robots.txt and sitemap.xml by URL or by pasting the raw content. Parses user-agent groups and Disallow/Allow rules, and reports sitemap URL counts.
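To illustrate the kind of parsing involved (a minimal sketch, not the tool's actual implementation), the TypeScript function below groups Allow/Disallow rules under consecutive User-agent lines and collects Sitemap declarations; all type and function names are hypothetical.

```ts
// Minimal robots.txt parser sketch: groups rules by user-agent and collects sitemap URLs.
interface RobotsGroup {
  userAgents: string[];
  rules: { directive: "allow" | "disallow"; path: string }[];
}

interface ParsedRobots {
  groups: RobotsGroup[];
  sitemaps: string[];
}

function parseRobotsTxt(text: string): ParsedRobots {
  const groups: RobotsGroup[] = [];
  const sitemaps: string[] = [];
  let current: RobotsGroup | null = null;
  let lastWasUserAgent = false;

  for (const rawLine of text.split(/\r?\n/)) {
    // Strip comments and surrounding whitespace.
    const line = rawLine.split("#")[0].trim();
    if (!line) continue;

    const sep = line.indexOf(":");
    if (sep === -1) continue; // malformed line, skip it
    const field = line.slice(0, sep).trim().toLowerCase();
    const value = line.slice(sep + 1).trim();

    if (field === "user-agent") {
      // Consecutive User-agent lines belong to the same group.
      if (!lastWasUserAgent || !current) {
        current = { userAgents: [], rules: [] };
        groups.push(current);
      }
      current.userAgents.push(value);
      lastWasUserAgent = true;
    } else {
      lastWasUserAgent = false;
      if (field === "sitemap") {
        sitemaps.push(value); // Sitemap is a standalone directive, not tied to a group
      } else if ((field === "allow" || field === "disallow") && current) {
        current.rules.push({ directive: field, path: value });
      }
    }
  }
  return { groups, sitemaps };
}
```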

Requests are proxied server-side to bypass browser CORS restrictions; your input is not stored.

Fetches /robots.txt from the given origin via a server-side proxy.
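A server-side proxy avoids CORS issues because the cross-origin request happens on the server rather than in the browser. Below is a minimal sketch of such a fetch, assuming a Node 18+ runtime with the global fetch API; the function name, the origin argument requiring a scheme, and the User-Agent string are all assumptions for illustration.

```ts
// Sketch of a server-side fetch for /robots.txt (hypothetical helper, not the tool's code).
async function fetchRobotsTxt(origin: string): Promise<string> {
  // origin must include a scheme, e.g. "https://example.com".
  const url = new URL("/robots.txt", origin).toString();
  const res = await fetch(url, {
    headers: { "User-Agent": "robots-validator/1.0" }, // hypothetical UA string
    redirect: "follow",
  });
  if (!res.ok) {
    throw new Error(`Fetch failed: ${res.status} ${res.statusText}`);
  }
  return res.text();
}
```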

Common Use Cases

  1. Validate robots.txt before deploying. Paste your site URL or raw robots.txt to check for Disallow rules that might accidentally block search engines from indexing important pages.
  2. Audit a competitor's crawl rules. Enter any public domain to fetch and parse its robots.txt, revealing which directories are blocked and which sitemaps are declared.
  3. Verify sitemap URL count. Fetch a sitemap.xml to confirm the number of URLs declared and check for malformed entries before submitting to Google Search Console (see the sketch after this list).
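
For the sitemap check, here is a rough sketch of counting <loc> entries and flagging entries that are not parseable absolute URLs. The regex-based approach and the function name are assumptions for illustration; a production validator would use a real XML parser.

```ts
// Count <loc> entries in a sitemap.xml and flag obviously malformed URLs (sketch only).
function summarizeSitemap(xml: string): { urlCount: number; malformed: string[] } {
  const locs = [...xml.matchAll(/<loc>([\s\S]*?)<\/loc>/g)].map((m) => m[1].trim());
  const malformed = locs.filter((loc) => {
    try {
      new URL(loc); // throws if not an absolute, parseable URL
      return false;
    } catch {
      return true;
    }
  });
  return { urlCount: locs.length, malformed };
}
```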

Pro plan — coming soon

Save your history, create reusable presets, and share outputs with a link. One plan, all tools.

See what's planned →
