Agent Readiness Score

Agents are coming. Are you ready?

Free. No signup. Results in about a minute.

Content

Can agents find, read, and access your content?

Interaction

Can agents take actions and navigate your site?

Signup Test

Can an AI agent create an account on your site?

WebMCP (Beta)

Does your site expose tools for AI agents?

Score bands: 0–49, 50–89, 90–100

How we score your site

1. Fetch your page

We request your URL from our server, just like an AI agent would.

2. Analyze content

We check llms.txt, robots.txt, structured data, token efficiency, and permissions.

3. Test interaction & signup

We render your page in a browser, test forms and navigation, and attempt agent signup — all in parallel.

4. Detect WebMCP tools & score

We detect registered tools and schemas, then weight each check to produce your readiness score.

What we check — and why it matters

AI agents interact with websites differently than humans. They read through accessibility trees, parse structured data, and consume entire pages as tokens. Here's every signal we evaluate.

Content Readiness

Can AI agents find, read, and understand your content? These checks measure how efficiently agents consume your pages — from discovery signals to token efficiency.

Discovery

llms.txt
llms.txt gives AI agents a structured summary of your site — what it does, what APIs exist, and where to find key content. Without it, agents must reverse-engineer your site from raw HTML. See the llmstxt.org spec.
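A minimal llms.txt, served at the site root, might look like the sketch below; the site name and paths are placeholders, and the section layout follows the llmstxt.org convention:

```text
# Example Store

> Example Store sells widgets online. This file orients AI agents;
> the links below are illustrative.

## Docs
- [API reference](https://example.com/docs/api): REST endpoints for search and checkout
- [Pricing](https://example.com/pricing): plan comparison and limits

## Optional
- [Blog](https://example.com/blog): product announcements
```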
Structured Data (JSON-LD)
JSON-LD structured data tells agents exactly what entities exist on your page — products, organizations, articles — and their properties. This eliminates guesswork when agents extract information. See Google's structured data guide.
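A JSON-LD block for a product page might look like this; every value is illustrative:

```html
<!-- Product entity in JSON-LD; all values are placeholders -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Widget",
  "description": "A sample product entry an agent can extract without parsing the page.",
  "offers": {
    "@type": "Offer",
    "price": "19.99",
    "priceCurrency": "USD"
  }
}
</script>
```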
Sitemap
A sitemap lets AI crawlers discover all your pages without following every link. This is critical for agents that need to index or search your full site. See the Sitemap protocol.
Meta Descriptions & OpenGraph
Meta descriptions and OpenGraph tags let agents summarize your page without reading the full HTML. Agents use these to decide whether a page is relevant before committing tokens to parse it.
Heading Hierarchy
Agents fold and navigate content by heading level. A clear h1 → h2 → h3 hierarchy lets them skip to relevant sections instead of reading the entire page sequentially.
Canonical URL
A canonical URL tells agents which version of a page is authoritative. Without it, agents may waste tokens on duplicate content or cite the wrong URL.
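Several of the discovery signals above live in the page head. A minimal sketch, with placeholder URLs and copy:

```html
<head>
  <title>Example Widget — Example Store</title>
  <!-- Summary an agent can read before committing tokens to the full page -->
  <meta name="description" content="One-sentence summary of what this page offers.">
  <meta property="og:title" content="Example Widget">
  <meta property="og:description" content="Same summary, reused for previews and unfurling.">
  <!-- Tells agents which URL is authoritative for this content -->
  <link rel="canonical" href="https://example.com/widgets/example">
</head>
```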

Readability

Markdown Content Negotiation
When a server responds to Accept: text/markdown with clean markdown, agents get your content at ~80% fewer tokens than raw HTML. This is the single biggest efficiency win for AI consumption. See Cloudflare's Markdown for Agents.
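Server-side, this is ordinary content negotiation. A framework-agnostic sketch of the decision (the markdown rendering itself is assumed to happen elsewhere):

```javascript
// Decide which representation to serve based on the Accept header.
// Returns "markdown" when the client explicitly accepts text/markdown,
// otherwise falls back to "html".
function pickRepresentation(acceptHeader) {
  const accepted = (acceptHeader || "")
    .split(",")
    .map((part) => part.split(";")[0].trim().toLowerCase());
  return accepted.includes("text/markdown") ? "markdown" : "html";
}

// An agent asking for markdown vs. a browser asking for HTML:
console.log(pickRepresentation("text/markdown, text/html;q=0.8")); // "markdown"
console.log(pickRepresentation("text/html,application/xhtml+xml")); // "html"
```

In a real handler you would branch on this result and set Content-Type accordingly; quality values (q=) are ignored here for brevity.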
Token Efficiency
This measures how much of your HTML is actual content vs. framework noise — CSS classes, nested divs, scripts. Low ratios mean agents burn most of their context window on markup instead of your content.
Content Extraction Quality
Agents extract your main content by looking for <main> or <article> elements. Without these semantic wrappers, they pull in navigation, footers, and sidebars — adding noise and wasting tokens.
Semantic HTML
Semantic elements like <main>, <nav>, <article>, and <section> give agents a structural map of your page. Generic <div> containers provide no hints about what content they hold.
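The extraction and semantics checks above reward the same page shape, sketched here with placeholder content:

```html
<body>
  <nav aria-label="Primary"><!-- site navigation --></nav>
  <main>
    <article>
      <h1>Page title</h1>
      <p>The content agents should extract lives inside main/article.</p>
    </article>
  </main>
  <footer><!-- boilerplate agents can safely skip --></footer>
</body>
```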
Page Token Footprint
Every page an agent reads consumes context window tokens. Lighter pages leave more room for conversation history and multi-step workflows. Pages over 30k tokens can exhaust smaller model context windows entirely.

Permissions

AI Crawler Policy (robots.txt)
robots.txt controls which crawlers can access your site. Blocking AI crawlers like GPTBot and ClaudeBot prevents your content from appearing in AI answers and agent workflows.
Content-Signal Header
The Content-Signal header explicitly declares whether your content can be used for AI training, search, and input. Without it, agents must guess your permissions or apply conservative defaults. See the Content Signals spec.
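Taken together, a permissive policy might look like the robots.txt below. The crawler names are real; the Content-Signal line follows the Content Signals proposal, whose syntax and placement are still evolving, so treat the exact values as illustrative:

```text
# robots.txt — allow major AI crawlers and declare usage signals
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: *
Allow: /

# Content Signals (syntax per the Content Signals proposal; illustrative)
Content-Signal: search=yes, ai-input=yes, ai-train=no

Sitemap: https://example.com/sitemap.xml
```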

Interaction Readiness

Can AI agents take actions on your site? These checks evaluate whether agents can navigate, fill forms, and interact with your pages using the accessibility tree.

Interactability

Labeled Form Inputs
Form inputs without labels are invisible to AI agents. Agents rely on label text and aria-label to understand what data each field expects and fill forms correctly.
Descriptive Button & Link Text
Buttons labeled "Click here" or "Learn more" don't tell an agent what the action does. Descriptive text like "Download pricing guide" lets agents choose the right action confidently.
API Documentation
Exposed API docs (OpenAPI, Swagger, GraphQL) let agents interact with your service programmatically instead of navigating a UI. This is the most reliable path for agent automation.
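The form and labeling checks above come down to standard markup. A sketch contrasting the recommended pattern with the anti-patterns (field names are placeholders):

```html
<form action="/newsletter" method="post">
  <!-- Label + autocomplete tell an agent exactly what this field expects -->
  <label for="email">Work email</label>
  <input id="email" name="email" type="email" autocomplete="email" required>

  <!-- Descriptive action text, not "Click here" -->
  <button type="submit">Subscribe to the newsletter</button>
</form>
```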

Accessibility for Agents

ARIA Labels on Interactive Elements
AI agents read pages through the accessibility tree, just like screen readers. Elements without accessible names are invisible or ambiguous to agents.
Landmark Roles
Landmark roles — <main>, <nav>, <header>, <footer> — let agents jump directly to relevant page sections. Without landmarks, an agent must scan every element sequentially.
Keyboard-Reachable Elements
Agents interact with pages through keyboard-like actions: tab, enter, arrow keys. Interactive elements that aren't keyboard-reachable are effectively unusable by agents.
Consistent Navigation
A named <nav> element with a clear aria-label helps agents parse your site's navigation. This lets them discover pages and understand your information architecture.
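The landmark and naming checks above can be sketched in a few lines; the links and the icon button are placeholders:

```html
<header><!-- logo and banner --></header>
<nav aria-label="Primary">
  <a href="/pricing">Pricing</a>
  <a href="/docs">Docs</a>
</nav>
<main>
  <!-- An icon-only control needs an accessible name to show up
       meaningfully in the accessibility tree -->
  <button aria-label="Open search">🔍</button>
</main>
<footer><!-- legal links --></footer>
```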

WebMCP Readiness

Does your site expose tools for AI agents to call directly? WebMCP lets pages register functions — like "add to cart" or "search" — that agents invoke without navigating a UI.

Tool Integration

WebMCP Meta Tag
A WebMCP meta tag tells visiting agents that your page exposes tools they can call. Without this signal, agents won't know to look for WebMCP capabilities.
WebMCP API in Scripts
References to navigator.modelContext in your scripts indicate your page implements the WebMCP API. This is the runtime interface agents use to discover and call your tools.
Registered Tools
Registered WebMCP tools are functions your page exposes for agents to call directly — like "add to cart", "search", or "filter results". No tools means agents can only read, not act.
Tool Descriptions & Schemas
Agents choose which tools to call based on descriptions and input schemas. Missing or vague descriptions lead agents to call the wrong tool or pass bad parameters.
Page State via provideContext()
provideContext() shares your page's current state — selected filters, user info, cart contents — with agents. Without it, agents must infer state from the DOM, which is error-prone.
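WebMCP is an early-stage proposal, so the API surface may change; the sketch below follows the names used above (navigator.modelContext, registered tools, provideContext) and treats the exact method signatures as assumptions. The tool descriptor itself is plain data:

```javascript
// Tool descriptor: name, description, and input schema an agent reads
// to decide whether and how to call the tool. Schema shape follows
// JSON Schema, as in MCP tool definitions.
const addToCartTool = {
  name: "add_to_cart",
  description: "Add a product to the shopping cart by product ID.",
  inputSchema: {
    type: "object",
    properties: {
      productId: { type: "string", description: "Product identifier" },
      quantity: { type: "integer", minimum: 1, default: 1 },
    },
    required: ["productId"],
  },
  // Handler the agent's call is routed to (assumed signature).
  async execute({ productId, quantity = 1 }) {
    return { ok: true, productId, quantity };
  },
};

// Registration is guarded: navigator.modelContext only exists in
// browsers implementing the WebMCP proposal. Method names below are
// assumptions based on the proposal, not a settled API.
if (typeof navigator !== "undefined" && navigator.modelContext) {
  navigator.modelContext.registerTool(addToCartTool);
  navigator.modelContext.provideContext({ cart: { items: [] } });
}
```

The vague-description failure mode is visible here: if `description` just said "cart tool", an agent could not distinguish it from a remove-from-cart tool.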

Agent Signup Readiness

Can an AI agent create an account on your site? We test this by running a real browser-based agent through your signup flow — finding the form, filling fields, and clicking submit.

Signup Flow

Signup Page Discoverable
If an AI agent can't find your signup page from the homepage, it can't onboard users on your behalf. Clear "Sign up" links in navigation or hero sections are essential.
Signup Form Parseable
Standard HTML form elements — <form>, <input>, <label> — are what agents know how to interact with. Custom JavaScript widgets or non-standard inputs can block agent form parsing.
Fields Identifiable
Agents identify form fields by their labels and autocomplete attributes. Fields without labels force agents to guess — often incorrectly — what data to enter.
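The discoverability, parseability, and field checks above all favor plain HTML. A signup form an agent can reliably fill might look like this; field names are illustrative:

```html
<form action="/signup" method="post">
  <label for="name">Full name</label>
  <input id="name" name="name" type="text" autocomplete="name" required>

  <label for="email">Email</label>
  <input id="email" name="email" type="email" autocomplete="email" required>

  <label for="password">Password</label>
  <input id="password" name="password" type="password" autocomplete="new-password" required>

  <button type="submit">Create account</button>
</form>
```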
No CAPTCHA Blocking
AI agents generally cannot solve CAPTCHAs. If your signup form requires one, agents are blocked. Consider risk-based challenges that trigger only for suspicious behavior.
Submission Succeeds
This tests whether an AI agent can complete your signup flow end-to-end. Failures here mean agents cannot create accounts, which blocks any downstream automation.
Clear Outcome Signal
After submitting a form, agents need a clear signal of what happened — a success message, redirect, or specific error. Ambiguous outcomes leave agents unable to determine their next step.