What is agent readiness?
Agent readiness is how legible and usable a website, API, or MCP server is to autonomous AI agents. A human visitor reading the hero section of a landing page gets by fine without structured data. An agent that hits the same URL needs explicit machine-readable signals: a sitemap it can crawl, an llms.txt at the root, OpenGraph plus JSON-LD on every page, predictable rate-limit headers, an OpenAPI spec for any public endpoint, and (increasingly) a discoverable MCP server card or AP2 manifest. Without those signals the agent guesses, fails halfway, or gives up. The Open Agent Readiness scanner here measures all of that against a public spec so you can fix the gaps before agents start showing up at your site.
The 7 readiness categories the scanner audits
Each scan groups checks into seven categories. A failing category drops the overall score; a green category contributes weighted points. The category breakdown lives in the public leaderboard and in the JSON returned by the v1 API.
1. Discoverability
Sitemap.xml, robots.txt, llms.txt, llms-full.txt, MCP server card, AP2 manifest, OpenAPI spec at a predictable path. If an agent can't find your endpoints by crawling, the rest of the categories don't matter.
2. Content
OpenGraph tags on every page, JSON-LD structured data (Organization, Product, FAQPage, BreadcrumbList), canonical URLs, and meta-description coverage. Determines whether your content surfaces in AI search (Perplexity, ChatGPT search, Claude search) and gets cited correctly.
3. Access control
Rate-limit headers (RateLimit-Limit, RateLimit-Remaining, Retry-After), proper 401/403 semantics, idempotency-key support on POST/PUT, CORS configured for cross-origin agent calls.
4. Capabilities
What the endpoint actually exposes. For MCP servers: number of tools, resources, prompts. For REST APIs: documented endpoints, schemas, example responses. For sites: structured product or pricing data.
5. Commerce (x402-mesh)
Whether the endpoint supports x402 micropayments for pay-per-call agent access, and whether prices are advertised in machine-readable form. This is the bar that Coinbase and Stripe have both endorsed.
6. Quality
Response-time consistency, schema stability across requests, deterministic error shapes, content-type honesty (returning JSON when the docs say JSON).
7. Trust
Identity verification (DNS records, signed manifests), terms-of-service URL discoverability, ai.txt or similar bot-policy files, and uptime track record from prior scans.
Why agent readiness matters in 2026
Agentic traffic is no longer hypothetical. ChatGPT search, Perplexity, Claude, Gemini Deep Research, Cursor, Devin, and a long tail of vertical agents all crawl the public web and call APIs as part of normal request handling. By Q1 2026 agentic traffic on B2B SaaS sites already accounts for 5-15% of all non-bot requests, and the curve is steeper than the human mobile transition was. Sites that ship readiness signals get cited in AI answers, picked up by agent task plans, and routed to by agent-driven discovery flows. Sites without them get politely skipped. The leaderboard above tracks who's already ahead.
How the scanner works
Paste a URL, hit scan. The scanner makes a sequence of HEAD and GET requests, parses HTML, fetches sitemaps, probes well-known paths (/llms.txt, /.well-known/mcp.json, /.well-known/ap2.json, /openapi.json), and runs spec-defined checks against each response. Non-2xx responses are categorised (skip vs fail) so a missing-but-optional file doesn't penalise you the same way as a misconfigured required one. Results render as the score ring above, plus a paste-ready fix prompt for every failure; drop the prompt into Claude or Cursor and the fix is implemented in minutes. The same spec runs over MCP for AI agents that want to self-audit, and over REST at /api/v1/agent-readiness.
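The probe-and-classify loop described above can be sketched in a few lines. This is an illustrative reconstruction, not the scanner's actual code: the check names and the "optional vs required" labels are assumptions, only the well-known paths come from the text.

```python
# Sketch of the scanner's discovery probe. Paths are the ones named
# in the text; check names and requirement labels are illustrative.
from urllib.parse import urljoin

WELL_KNOWN = [
    ("llms_txt",     "/llms.txt",                "optional"),
    ("mcp_card",     "/.well-known/mcp.json",    "optional"),
    ("ap2_manifest", "/.well-known/ap2.json",    "optional"),
    ("openapi_spec", "/openapi.json",            "required"),
]

def probe_urls(base_url: str) -> list[tuple[str, str]]:
    """Build the (check_name, absolute_url) pairs the scanner fetches."""
    return [(name, urljoin(base_url, path)) for name, path, _ in WELL_KNOWN]

def classify(requirement: str, status: int) -> str:
    """A non-2xx on an optional file is a skip, not a fail."""
    if 200 <= status < 300:
        return "pass"
    return "skip" if requirement == "optional" else "fail"
```

The skip/fail split is the key design point: an absent llms.txt simply forfeits its points, while a broken required spec actively drags the category down.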
Common readiness failures and how to fix them
Missing llms.txt. The single most common gap. Add a plaintext file at the root listing your key URLs and a one-paragraph site description. Five minutes.
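A minimal llms.txt, following the proposed convention (an H1 name, a blockquote summary, then linked sections); the site name and URLs below are placeholders:

```
# Example Co

> Example Co builds developer tools for agent-readiness auditing.

## Docs

- [API reference](https://example.com/docs/api): REST endpoints and auth
- [Pricing](https://example.com/pricing): plans and credit costs
```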
JSON-LD on the homepage only. Search engines already accept this; agents need it on product, pricing, docs, and individual content pages too. Generate it from a shared helper rather than hand-rolling it on each page.
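A shared helper might look like the sketch below. The schema.org types are real; the helper itself and the sample product data are illustrative.

```python
# Illustrative shared helper for emitting JSON-LD on every page,
# not just the homepage.
import json

def json_ld(type_: str, **props) -> str:
    """Render a schema.org JSON-LD script tag for one entity."""
    data = {"@context": "https://schema.org", "@type": type_, **props}
    return ('<script type="application/ld+json">'
            + json.dumps(data) + "</script>")

# The same helper serves product, pricing, docs, and FAQ pages:
product_tag = json_ld("Product", name="Scanner Pro", offers={
    "@type": "Offer", "price": "29.00", "priceCurrency": "USD"})
```

One helper means one place to fix when a schema check fails, instead of a per-page hunt.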
Rate-limit headers missing on API responses. Even if your limits are generous, the lack of headers makes agents back off conservatively. Add RateLimit-Limit, RateLimit-Remaining, RateLimit-Reset to every authenticated route.
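A sketch of building those headers; the limit values and window are made up, and the header names follow the draft IETF RateLimit-headers convention the text uses.

```python
# Headers to attach to every authenticated API response so agents
# can pace themselves instead of backing off conservatively.
def rate_limit_headers(limit: int, used: int, window_s: int = 60) -> dict[str, str]:
    return {
        "RateLimit-Limit": str(limit),
        "RateLimit-Remaining": str(max(limit - used, 0)),
        "RateLimit-Reset": str(window_s),  # seconds until the window resets
    }
```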
MCP server with no card. If you publish an MCP server, drop the JSON card at /.well-known/mcp.json with name, description, version, transport, tools array. Without it the server is invisible to discovery.
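An illustrative card with the fields listed above (name, description, version, transport, tools); the values are placeholders and the exact card schema should be checked against the MCP discovery spec:

```json
{
  "name": "example-scanner",
  "description": "Agent-readiness scans over MCP",
  "version": "1.0.0",
  "transport": "streamable-http",
  "tools": [{ "name": "scan_agent_readiness" }]
}
```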
x402 endpoint without published pricing. Return the price in the 402 response body in the format the x402 spec defines. Otherwise paying agents can't budget the call.
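The exact body is defined by the x402 spec, so treat the fragment below as an illustrative shape only: the field names reflect one reading of the spec, and the payTo and asset values are placeholders to replace with your own.

```json
{
  "x402Version": 1,
  "accepts": [{
    "scheme": "exact",
    "network": "base",
    "maxAmountRequired": "10000",
    "resource": "/api/v1/agent-readiness",
    "payTo": "0xYourReceivingAddress",
    "asset": "0xUsdcContractAddress"
  }]
}
```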
Open Agent Readiness scanner vs Cloudflare's AI-bot management
Cloudflare's product is about blocking agent traffic; this scanner is about welcoming it. The two solve opposite problems. If you actively want to be discoverable to agents (most sites should, by 2026), you need the readiness audit; if you want to keep agents out of premium content, Cloudflare is the answer. They compose well: gate paid endpoints behind Cloudflare, make the rest of your surface readiness-compliant. Full breakdown on the dedicated comparison page.
Frequently asked questions
Is the scanner free?
Yes. Public scans are free with no signup required. The /api/v1/agent-readiness endpoint costs 1 credit per call (Free plan includes 1,500 credits per month, plenty for occasional checks). Paid agent access via x402 is also supported for autonomous agents with no API key.
Do you store the URLs I scan?
Public scans contribute anonymous results to the leaderboard. If you scan a URL we already have a profile for (e.g. a startup in our directory), the score updates the public profile. To opt out, use the API with the private=true flag.
What's the difference between agent readiness and SEO?
SEO optimises for human-mediated search engines (Google ranks pages, humans click, humans buy). Agent readiness optimises for agents that read, decide, and act without a human in the loop. Overlap exists (JSON-LD, sitemaps, OG tags help both) but agents care about API discoverability, rate-limit semantics, and machine-readable pricing in ways that traditional SEO doesn't.
Does my site need an MCP server to score well?
No. MCP is one capability category. A static marketing site with great content readiness, discoverability, and structured data scores highly without an MCP server. MCP becomes critical when you have an API or interactive product that agents would want to call.
How is the score weighted?
The seven categories aren't equally weighted. Discoverability and Content count more for content sites; Capabilities and Quality count more for API/MCP endpoints. The scanner detects what kind of surface it's scanning and adjusts. Full weighting in the open spec linked from the API docs.
Will Google or AI search engines see better-scored sites first?
Indirectly, yes. AI search engines (Perplexity, ChatGPT search, Claude search) read the same JSON-LD and OpenGraph tags the readiness scanner audits. A high readiness score correlates with higher citation rates in AI answers because the underlying signals are the same.
Can I run the scanner programmatically?
Yes. Three ways: REST (POST /api/v1/agent-readiness with a Bearer key), MCP (scan_agent_readiness tool over our public MCP server), and x402 (no API key, pay USDC per call on Base). All return the same JSON shape. See the docs for examples.
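A minimal stdlib-only sketch of the REST path: the endpoint path and Bearer auth come from the answer above, while the host name and the request body field are assumptions to check against the docs.

```python
# Build a POST to /api/v1/agent-readiness with a Bearer key
# (1 credit per call on the Free plan, per the FAQ above).
import json
import urllib.request

def build_scan_request(target_url: str, api_key: str) -> urllib.request.Request:
    return urllib.request.Request(
        "https://scanner.example/api/v1/agent-readiness",  # host is illustrative
        data=json.dumps({"url": target_url}).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
        method="POST",
    )

# Sending it and reading the JSON report:
# with urllib.request.urlopen(build_scan_request("https://example.com", key)) as r:
#     report = json.load(r)
```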
Does the scanner check x402 micropayment readiness?
Yes, x402 is the Commerce category. The scanner probes for a 402 response on a sample endpoint, validates the response body matches the x402 spec, and checks that the price is in machine-readable USDC. Sites that don't expose paid endpoints aren't penalised.
Where can I see how my competitors score?
The public leaderboard ranks the top sites and APIs by readiness score. You can filter by sector, country, or surface type (site, API, MCP). Each entry links to the full scan report.
Is this related to llms.txt or AP2?
Both, plus more. llms.txt is one Discoverability check. AP2 (Agentic Payments Protocol) manifest discovery is one Commerce check. The scanner aggregates the emerging standards (llms.txt, MCP, AP2, x402) into a single score so you don't have to track each spec separately.