Cloudflare's isitagentready.com scores your site across five categories: Discoverability, Content Accessibility, Bot Access Control, Protocol Discovery, and Commerce. Each one maps to a set of emerging standards that define how well AI agents can find, read, and interact with your platform. We covered the high-level picture in How to get your website agent-ready. This post goes deeper into what each standard is, where it comes from, and how mature it is today.
Discoverability
Agents need to find your content the same way search engines do, but with stricter expectations.
robots.txt is still the first file any crawler reads. Agents look for explicit AI bot rules here — not just legacy Googlebot directives. If your robots.txt doesn't mention AI-specific user agents, most bots will default to crawling everything — which means you've given up control without realizing it.
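Here's a sketch of what agent-aware rules can look like. The user agents shown are real AI crawlers, but the specific policy is illustrative, not a recommendation:

```txt
# OpenAI's crawler: allowed in docs, blocked elsewhere (example policy)
User-agent: GPTBot
Allow: /docs/
Disallow: /

# Anthropic's crawler: full access
User-agent: ClaudeBot
Allow: /

# Common Crawl (widely used as training data): blocked
User-agent: CCBot
Disallow: /

# Everyone else keeps the defaults
User-agent: *
Allow: /

Sitemap: https://example.com/sitemap.xml
```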
Sitemap — a properly maintained XML sitemap lets agents map your content structure in a single request. This is table stakes for SEO and equally important for agents doing structured content discovery.
Link headers — HTTP Link response headers expose relationships between resources (alternate formats, canonical URLs, pagination) without parsing HTML. Agents use these to navigate between related pages efficiently — particularly useful when combined with Markdown content negotiation.
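For example, a blog post's response might carry headers like these (URLs are placeholders). An agent can jump straight to the Markdown variant or the next page without parsing the HTML body:

```http
HTTP/1.1 200 OK
Content-Type: text/html; charset=utf-8
Link: <https://example.com/blog/post-1>; rel="canonical"
Link: <https://example.com/blog/post-1.md>; rel="alternate"; type="text/markdown"
Link: <https://example.com/blog?page=2>; rel="next"
```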
None of these are new. But most sites have a robots.txt written for 2015, not for agents. Updating these three files is the highest-impact, lowest-effort starting point.
Content Accessibility
Agents perform better with clean, structured text than with full HTML documents cluttered with nav bars, cookie banners, and tracking scripts.
Markdown content negotiation addresses this directly. When an agent sends a request with Accept: text/markdown, your server responds with a Markdown variant of the page instead of full HTML. Less noise, fewer tokens, faster processing.
The implementation can be as simple as a middleware that intercepts the Accept header and returns a pre-rendered .md version of the page — or as sophisticated as on-the-fly HTML-to-Markdown conversion. Monogram sites ship with Markdown-ready routes by default. You can see it in action by appending .md to any URL on monogram.io.
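As a concrete sketch, here's what the middleware version could look like in Express-style TypeScript. The file layout and route mapping are assumptions for illustration, not a reference implementation:

```ts
import { promises as fs } from "node:fs";
import path from "node:path";
import type { NextFunction, Request, Response } from "express";

// Hypothetical build output: a pre-rendered .md file per page.
const MD_ROOT = path.join(process.cwd(), "dist", "markdown");

export async function markdownNegotiation(
  req: Request,
  res: Response,
  next: NextFunction
) {
  // Only intercept requests that explicitly ask for Markdown.
  if (!(req.headers.accept ?? "").includes("text/markdown")) return next();

  // Map "/" -> index.md, "/blog/post" -> blog/post.md.
  const slug = req.path.replace(/^\/+|\/+$/g, "") || "index";
  const file = path.join(MD_ROOT, `${slug}.md`);

  // path.join normalizes "..", so this guards against path traversal.
  if (!file.startsWith(MD_ROOT)) return next();

  try {
    const body = await fs.readFile(file, "utf8");
    res.setHeader("Vary", "Accept"); // caches must key on the Accept header
    res.type("text/markdown; charset=utf-8").send(body);
  } catch {
    next(); // no Markdown variant for this route; serve HTML as usual
  }
}
```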
Of all the standards on this list, Markdown negotiation has the most immediate practical value. Agents that can get clean Markdown from your site use fewer tokens processing your content, which makes them more likely to include your information in their responses. Same dynamic as fast-loading pages ranking better in search.
Bot Access Control
Beyond discoverability, there's the question of control — defining how agents interact with your content after they've found it.
AI bot rules — directives in robots.txt that go beyond crawl permissions to specify usage intent. You might allow an agent to crawl your docs but block it from using that content for training. The robots.txt spec doesn't natively support this distinction well, which is why the next two standards exist.
Content Signals — a Cloudflare proposal that adds a structured layer on top of robots.txt. Instead of binary allow/disallow, you can express preferences for how content is used after access: allow search indexing but block AI training, permit grounding but not fine-tuning, allow summarization but not verbatim reproduction. It's a more nuanced permission model that reflects how content is actually used today.
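In the proposal, the signals live in robots.txt alongside your existing rules. A sketch of the draft syntax, which may still change:

```txt
# search:   build a search index, show links and snippets
# ai-input: use content as real-time input (grounding, RAG, summarization)
# ai-train: train or fine-tune AI models
User-Agent: *
Content-Signal: search=yes, ai-input=yes, ai-train=no
Allow: /
```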
Web Bot Auth — cryptographic identity for bots. Instead of trusting a user-agent string (which anyone can spoof), agents prove who they are by signing HTTP requests with Ed25519 keys. The verification keys are published at /.well-known/http-message-signatures-directory. This lets you make access decisions based on verified identity rather than self-reported names.
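The request signatures follow the HTTP Message Signatures RFC (9421), and the directory itself is a JWKS-style key set. A sketch of what a bot operator might publish, with placeholder key material:

```json
{
  "keys": [
    {
      "kty": "OKP",
      "crv": "Ed25519",
      "x": "<base64url-encoded Ed25519 public key>"
    }
  ]
}
```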
Content Signals and Web Bot Auth are both Cloudflare proposals, and adoption beyond Cloudflare-powered sites is still early. Worth implementing if you're on Cloudflare, worth watching if you're not.
Protocol Discovery
These standards go further than content access — they let agents discover and use your services programmatically.
MCP Server Cards — a JSON document hosted at /.well-known/mcp/server-card.json that declares your MCP server's capabilities. It tells agents what tools you offer, what resources are available, how to authenticate, and which transport protocol to use — all before establishing a full session.
The spec is being drafted as SEP-2127 by Anthropic and GitHub. A Server Card looks roughly like this:
```json
{
  "serverInfo": {
    "name": "your-service",
    "version": "1.0.0",
    "description": "What your service does"
  },
  "transport": {
    "type": "streamable-http",
    "url": "https://<your-domain>.com/mcp"
  },
  "capabilities": {
    "tools": ["search", "checkout", "get-inventory"],
    "resources": ["products", "categories"]
  }
}
```

Agents shouldn't need to initialize a full MCP session just to find out if a server has what they're looking for. The spec is still in draft and reference SDK implementations are on the roadmap, but the pattern is minimal enough that early adoption carries little risk.
Agent Skills — a file-based standard for packaging domain expertise. Sites expose an index at /.well-known/agent-skills/index.json that lists available capabilities, each defined as a YAML frontmatter header plus Markdown instructions. Agents can load these on demand based on the task they're working on.
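A skill file is small: YAML frontmatter for metadata, Markdown for the instructions themselves. A hypothetical example for an ecommerce site (the endpoint it references is made up):

```md
---
name: inventory-lookup
description: How to check live stock before quoting availability to a customer.
---

# Inventory lookup

1. Call GET /api/v1/inventory?sku={sku} (documented in our API catalog).
2. Treat quantities under 5 as "low stock" in customer-facing copy.
3. Never promise a delivery date for an item with zero quantity.
```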
WebMCP — a W3C Community Group draft led by engineers from Microsoft and Google that brings MCP to the browser. Instead of requiring a backend server, web pages declare tools directly via navigator.modelContext — agents running in the browser can discover and invoke JavaScript functions on your page. Currently shipping in Chrome Canary, this bridges the gap between server-side MCP and client-side agent interactions.
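Based on the draft explainer, registering a page tool might look roughly like this. The API surface is still in flux, so treat the shape below as a sketch rather than the final interface:

```ts
// Sketch based on the WebMCP draft explainer. navigator.modelContext ships
// behind a flag in Chrome Canary, and the API surface may still change.

// Hypothetical stand-in for the page's existing cart logic.
declare function addToCart(sku: string, quantity: number): Promise<void>;

const modelContext = (navigator as any).modelContext;

modelContext?.registerTool({
  name: "add-to-cart",
  description: "Add a product to the shopping cart by SKU.",
  inputSchema: {
    type: "object",
    properties: {
      sku: { type: "string" },
      quantity: { type: "number" },
    },
    required: ["sku"],
  },
  // The agent invokes this like any MCP tool; it runs as ordinary page JS.
  async execute({ sku, quantity = 1 }: { sku: string; quantity?: number }) {
    await addToCart(sku, quantity);
    return { content: [{ type: "text", text: `Added ${quantity} x ${sku} to cart` }] };
  },
});
```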
API Catalog — standard API documentation (OpenAPI/Swagger) that agents can parse to understand your available endpoints. If you already have an OpenAPI spec, you're most of the way there.
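Even a skeletal spec gives agents something to parse. A minimal example:

```yaml
openapi: 3.1.0
info:
  title: Example Store API
  version: 1.0.0
  description: Endpoints an agent can call to search the catalog.
paths:
  /products:
    get:
      summary: Search products
      parameters:
        - name: q
          in: query
          description: Free-text search query
          schema:
            type: string
      responses:
        "200":
          description: Matching products as JSON
```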
OAuth discovery — standard OAuth 2.0 metadata at /.well-known/oauth-authorization-server so agents know how to authenticate programmatically. This matters more as agents move from reading public content to performing authenticated actions on behalf of users.
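The metadata format comes from RFC 8414. A trimmed example of what an agent fetches from that path, with placeholder issuer and endpoints:

```json
{
  "issuer": "https://auth.example.com",
  "authorization_endpoint": "https://auth.example.com/oauth/authorize",
  "token_endpoint": "https://auth.example.com/oauth/token",
  "registration_endpoint": "https://auth.example.com/oauth/register",
  "scopes_supported": ["read:products", "write:orders"],
  "response_types_supported": ["code"],
  "grant_types_supported": ["authorization_code", "refresh_token"],
  "code_challenge_methods_supported": ["S256"]
}
```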
Most of these are in draft or early adoption. That's exactly when it pays to start implementing — before the ecosystem matures and the early movers have already captured agent traffic.
Commerce
These protocols enable AI agents to discover products, initiate checkout, and complete purchases autonomously.
Agentic Commerce Protocol (ACP) — developed by Stripe and OpenAI as an open standard under Apache 2.0. It defines how agents interact with merchant checkouts: discovering products, adding items to a cart, and completing payment — all through structured API calls. ChatGPT is the first agent platform implementing it, Stripe the first payment processor. Hard to find stronger backing than that.
x402 — revives the HTTP 402 Payment Required status code for machine-native micropayments. An agent hits your endpoint, gets a 402 response with payment instructions, pays (currently via stablecoins), and retries with proof of payment: no accounts, no checkout pages, everything at the HTTP layer. The protocol design is clean, but the dependency on crypto payment rails limits near-term adoption.
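The exchange looks roughly like this. The header and field names follow the x402 draft as of this writing and the values are placeholders:

```txt
GET /api/report HTTP/1.1
Host: example.com

HTTP/1.1 402 Payment Required
Content-Type: application/json

{
  "x402Version": 1,
  "accepts": [{
    "scheme": "exact",
    "network": "base",
    "maxAmountRequired": "10000",
    "asset": "<stablecoin contract address>",
    "payTo": "<merchant address>",
    "resource": "https://example.com/api/report"
  }]
}

GET /api/report HTTP/1.1
Host: example.com
X-PAYMENT: <base64-encoded signed payment payload>

HTTP/1.1 200 OK
```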
UCP (Universal Commerce Protocol) — co-developed by Google, Shopify, and Etsy, with endorsements from Visa, Mastercard, PayPal, and dozens of major retailers. UCP defines building blocks for the full commerce lifecycle — discovery, checkout, identity linking, and order management — and is designed to work alongside MCP and A2A.
Agentic commerce is the furthest out in terms of mainstream readiness, but Stripe and OpenAI backing ACP signals where this is headed. For ecommerce clients, we're already evaluating ACP integration paths.
Where This Leaves You
Most sites score poorly on the scanner today. The highest-impact changes are also the simplest:
- Update your `robots.txt` with explicit AI bot rules
- Serve Markdown variants of your key pages
- Make sure your sitemap is current and your `Link` headers are present
- Add a Server Card if you expose any API or service
- Run the scan at isitagentready.com and use the generated instructions to close the gaps
The scan generates implementation instructions you can paste directly into tools like Cursor or Claude Code — turning audit results into code changes in minutes.
At Monogram, we've integrated the production-ready standards into our delivery process and we're tracking the rest as they mature.