Robots.txt Validator

Validate your robots.txt file. Check syntax, directives, sitemap reference, and potential crawl issues.

Syntax check · Directives · Sitemap ref · ~2 seconds
This tool analyzes the single URL you enter, not your entire website.

What We Check

Every check comes with a pass/fail result and specific fix instructions.

Accessible

robots.txt must be at the root of your domain (e.g., https://example.com/robots.txt) and return HTTP 200.

Content-Type

Should be served as text/plain. Other content types may be ignored by crawlers.

File Size

Google processes only the first 500KB of robots.txt; anything beyond that is ignored. Keep it concise.
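
Curious how these first three checks work? Here is a minimal sketch using Python's standard library, with example.com as a placeholder domain; the validator's actual implementation may differ:

```python
# Sketch of the three HTTP-level checks above: accessible at the root,
# served as text/plain, and within Google's 500KB processing limit.
from urllib.request import urlopen
from urllib.error import URLError, HTTPError

MAX_BYTES = 500 * 1024  # Google processes only the first 500KB

def check_robots(domain: str) -> None:
    url = f"https://{domain}/robots.txt"  # must live at the domain root
    try:
        with urlopen(url, timeout=10) as resp:
            body = resp.read(MAX_BYTES + 1)
            ctype = resp.headers.get("Content-Type", "")
            print("Accessible (HTTP 200):", resp.status == 200)
            print("Content-Type is text/plain:", ctype.startswith("text/plain"))
            print("Under 500KB:", len(body) <= MAX_BYTES)
    except (HTTPError, URLError) as exc:
        print(f"Accessible (HTTP 200): False ({exc})")

check_robots("example.com")
```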

Has User-agent

At minimum, you need User-agent: * to address all crawlers.

No Blanket Block

Disallow: / under User-agent: * blocks every crawler from your entire site. Usually unintentional.

Has Sitemap

Including a Sitemap: directive helps crawlers discover your sitemap automatically.
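
Putting the last three checks together, a minimal robots.txt that passes all of them could look like this (example.com and /private/ are placeholders):

```
User-agent: *
Disallow: /private/

Sitemap: https://www.example.com/sitemap.xml
```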

Valid Syntax

The validator checks that every line uses a recognized directive: User-agent, Disallow, Allow, Sitemap, or Crawl-delay.
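
A rough sketch of how a directive-level syntax check like this can work, assuming the five directives above are the full allow-list (a hypothetical helper, not the validator's actual code):

```python
# Flag any non-blank, non-comment line whose directive is not one of
# the five recognized directives.
VALID = {"user-agent", "disallow", "allow", "sitemap", "crawl-delay"}

def invalid_lines(robots_txt: str) -> list[tuple[int, str]]:
    problems = []
    for n, raw in enumerate(robots_txt.splitlines(), start=1):
        line = raw.split("#", 1)[0].strip()  # drop comments and whitespace
        if not line:
            continue  # blank or comment-only lines are fine
        directive = line.split(":", 1)[0].strip().lower()
        if directive not in VALID:
            problems.append((n, raw))
    return problems

sample = "User-agent: *\nDisalow: /tmp/\nSitemap: https://example.com/sitemap.xml"
print(invalid_lines(sample))  # [(2, 'Disalow: /tmp/')]
```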

Allows Assets

Blocking CSS/JS files prevents Google from fully rendering your pages, which hurts mobile-first indexing.
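
One common pattern is to carve asset files out of an otherwise disallowed directory. Google's parser supports * and $ wildcards, and the longer, more specific Allow rules take precedence here; /assets/ is a placeholder path:

```
User-agent: *
Disallow: /assets/
Allow: /assets/*.css$
Allow: /assets/*.js$
```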

Why It Matters

Numbers that make a difference for your website.

8 checks: comprehensive validation
500KB limit: Google's maximum file size
SEO critical: a wrong config can block crawling of your entire site
Always free: no limits, no signup

Frequently Asked Questions

Common questions about this tool and how to use the results.

What happens if I don't have robots.txt?
Google treats a missing robots.txt (a 404) as permission to crawl everything, which is usually fine. But having one gives you control over crawl budget and which sections to exclude.
Can robots.txt prevent indexing?
No! robots.txt only prevents crawling; pages can still appear in search results if other sites link to them. To prevent indexing, use a noindex meta tag (<meta name="robots" content="noindex">), and make sure the page is not blocked in robots.txt, or Google will never see the tag.
Should I block /admin/ or /api/ paths?
Yes, block paths that have no value in search, such as admin screens and internal APIs. But remember this only stops crawling, not indexing, so use authentication or noindex for anything truly sensitive. A minimal example follows.
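
A minimal block for those paths (the paths are placeholders):

```
User-agent: *
Disallow: /admin/
Disallow: /api/
```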

Ready to audit your site?

Enter your URL above and get results in seconds. Completely free.

Start Audit