Agent Readiness Score

Agents are coming. Are you ready?

Free. No signup. Results in about a minute.

OpenClaw Experience

What happened when a real AI agent tried to use your site?

Browserbase Test

Can an AI agent create an account on your site?

Rules

Does your site follow best practices for AI agent access and interaction?

WebMCP (Beta) *

Does your site expose tools for AI agents?

0–49 · 50–89 · 90–100

* Not included in overall score

How do you stack up?

How we score your site

1. Fetch your page

We fetch your URL from our servers, just as an AI agent would.

2. Analyze content

We check llms.txt, robots.txt, structured data, token efficiency, and permissions.

3. Test interaction & signup

We render your page in a browser, test forms and navigation, and attempt agent signup — all in parallel.

4. Detect WebMCP tools & score

We detect registered tools and schemas, then weight each check to produce your readiness score.

Run it yourself

The agent test that powers part of this score is an open-source OpenClaw skill. Install it and run it locally against any site — no account needed.

openclaw skill install pillarhq/openclaw-agent-score

Then ask your agent: "Run agent-score on https://your-site.com"

View on GitHub

What we check — and why it matters

AI agents interact with websites differently than humans. They read through accessibility trees, parse structured data, and consume entire pages as tokens. Here's every signal we evaluate.

End-to-End Agent Test

Navigation & Exploration
The agent explores your site freely — clicking links, reading pages, and building a mental model of your information architecture. Sites with clear navigation and semantic structure score higher.
Task Completion
The agent attempts real tasks: signing up, searching, filtering, or completing flows. Each successful task demonstrates that your site works for AI-driven automation.
Error Handling & Feedback
When the agent hits errors or dead ends, how your site responds matters. Clear error messages and recovery paths help agents self-correct — vague failures leave them stuck.
Overall Agent Experience Score
After exploring your site, the agent self-scores its overall experience. This reflects a holistic assessment of how well your site works for AI agents, not just individual checks.
OpenClaw

Signup Flow

Signup Page Discoverable
If an AI agent can't find your signup page from the homepage, it can't onboard users on your behalf. Clear "Sign up" links in navigation or hero sections are essential.
Signup Form Parseable
Standard HTML form elements — <form>, <input>, <label> — are what agents know how to interact with. Custom JavaScript widgets or non-standard inputs can block agent form parsing.
Fields Identifiable
Agents identify form fields by their labels and autocomplete attributes. Fields without labels force agents to guess — often incorrectly — what data to enter.
No CAPTCHA Blocking
AI agents cannot solve CAPTCHAs. If your signup form requires one, agents are completely blocked. Consider risk-based challenges that only trigger for suspicious behavior.
Submission Succeeds
This tests whether an AI agent can complete your signup flow end-to-end. Failures here mean agents cannot create accounts, which blocks any downstream automation.
Clear Outcome Signal
After submitting a form, agents need a clear signal of what happened — a success message, redirect, or specific error. Ambiguous outcomes leave agents unable to determine their next step.
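
The signup checks above can be illustrated with a minimal, agent-parseable form. This is a sketch, not a prescription: the action URL, field names, and outcome message are assumptions for illustration.

```html
<!-- Standard elements, visible labels, and autocomplete hints let agents
     identify each field without guessing. Names and URL are illustrative. -->
<form action="/signup" method="post">
  <label for="email">Email</label>
  <input id="email" name="email" type="email" autocomplete="email" required>

  <label for="password">Password</label>
  <input id="password" name="password" type="password" autocomplete="new-password" required>

  <button type="submit">Create account</button>
</form>

<!-- After submission, give an explicit outcome signal, e.g. a redirect or: -->
<!-- <p role="status">Account created. Check your email to verify.</p> -->
```

Note the design choice: a plain `<form>` with native inputs works for agents by default, while a custom JavaScript widget has to recreate all of these affordances by hand.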

Discovery

llms.txt
llms.txt gives AI agents a structured summary of your site — what it does, what APIs exist, and where to find key content. Without it, agents must reverse-engineer your site from raw HTML.
llmstxt.org spec
Structured Data (JSON-LD)
JSON-LD structured data tells agents exactly what entities exist on your page — products, organizations, articles — and their properties. This eliminates guesswork when agents extract information.
Google structured data guide
Sitemap
A sitemap lets AI crawlers discover all your pages without following every link. This is critical for agents that need to index or search your full site.
Sitemap protocol
Meta Descriptions & OpenGraph
Meta descriptions and OpenGraph tags let agents summarize your page without reading the full HTML. Agents use these to decide whether a page is relevant before committing tokens to parse it.
Heading Hierarchy
Agents fold and navigate content by heading level. A clear h1 → h2 → h3 hierarchy lets them skip to relevant sections instead of reading the entire page sequentially.
Canonical URL
A canonical URL tells agents which version of a page is authoritative. Without it, agents may waste tokens on duplicate content or cite the wrong URL.
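
Several of the discovery signals above live in a page's `<head>`. As a sketch, with the entity type, values, and URLs as illustrative assumptions:

```html
<!-- Illustrative discovery signals; all names, values, and URLs are assumptions -->
<link rel="canonical" href="https://example.com/widgets/example-widget">
<meta name="description" content="Example Widget: a lightweight widget for demos.">
<meta property="og:title" content="Example Widget">
<meta property="og:description" content="A lightweight widget for demos.">

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Widget",
  "description": "A sample product used to illustrate structured data."
}
</script>
```

An agent can read this block alone to decide whether the page is relevant, before spending tokens on the full body.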

Readability

Markdown Content Negotiation
When a server responds to Accept: text/markdown with clean markdown, agents get your content at ~80% fewer tokens than raw HTML. This is the single biggest efficiency win for AI consumption.
Cloudflare Markdown for Agents
Token Efficiency
This measures how much of your HTML is actual content vs. framework noise — CSS classes, nested divs, scripts. Low ratios mean agents burn most of their context window on markup instead of your content.
Content Extraction Quality
Agents extract your main content by looking for <main> or <article> elements. Without these semantic wrappers, they pull in navigation, footers, and sidebars — adding noise and wasting tokens.
Semantic HTML
Semantic elements like <main>, <nav>, <article>, and <section> give agents a structural map of your page. Generic <div> containers provide no hints about what content they hold.
Page Token Footprint
Every page an agent reads consumes context window tokens. Lighter pages leave more room for conversation history and multi-step workflows. Pages over 30k tokens can exhaust smaller model context windows entirely.
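
On the wire, the markdown content negotiation described above looks roughly like the exchange below; the path and response body are illustrative, and this assumes your server implements the negotiation.

```http
GET /docs/getting-started HTTP/1.1
Host: example.com
Accept: text/markdown

HTTP/1.1 200 OK
Content-Type: text/markdown; charset=utf-8
Vary: Accept

# Getting Started

Install the CLI, then run it against your project directory.
```

The `Vary: Accept` header tells caches that the HTML and markdown variants are distinct responses for the same URL.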

Permissions

AI Crawler Policy (robots.txt)
robots.txt controls which crawlers can access your site. Blocking AI crawlers like GPTBot and ClaudeBot prevents your content from appearing in AI answers and agent workflows.
Content-Signal Header
The Content-Signal header explicitly declares whether your content can be used for AI training, search, and input. Without it, agents must guess your permissions or apply conservative defaults.
Content Signals spec
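
A permissive policy covering both checks might look like the robots.txt sketch below. GPTBot and ClaudeBot are real AI crawler user agents; the Content-Signal line follows the Content Signals proposal, and its exact syntax and placement should be confirmed against the spec.

```text
# robots.txt — allow named AI crawlers explicitly
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

# Declare how content may be used (syntax per the Content Signals proposal)
User-agent: *
Content-Signal: search=yes, ai-input=yes, ai-train=no
Allow: /
```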

Interactability

Labeled Form Inputs
Form inputs without labels are invisible to AI agents. Agents rely on label text and aria-label to understand what data each field expects and fill forms correctly.
Descriptive Button & Link Text
Buttons labeled "Click here" or "Learn more" don't tell an agent what the action does. Descriptive text like "Download pricing guide" lets agents choose the right action confidently.
API Documentation
Exposed API docs (OpenAPI, Swagger, GraphQL) let agents interact with your service programmatically instead of navigating a UI. This is the most reliable path for agent automation.
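
A minimal sketch of the first two checks, with field names and link targets as illustrative assumptions:

```html
<!-- Labeled input: agents read the label text and autocomplete hint
     to decide what data the field expects -->
<label for="work-email">Work email</label>
<input id="work-email" name="email" type="email" autocomplete="email">

<!-- Descriptive action text instead of "Click here" or "Learn more" -->
<button type="submit">Download pricing guide</button>
<a href="/docs/api">Browse the API reference</a>
```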

Accessibility for Agents

ARIA Labels on Interactive Elements
AI agents read pages through the accessibility tree, just like screen readers. Elements without accessible names are invisible or ambiguous to agents.
Landmark Roles
Landmark roles — <main>, <nav>, <header>, <footer> — let agents jump directly to relevant page sections. Without landmarks, an agent must scan every element sequentially.
Keyboard-Reachable Elements
Agents interact with pages through keyboard-like actions: tab, enter, arrow keys. Interactive elements that aren't keyboard-reachable are effectively unusable by agents.
Consistent Navigation
A named <nav> element with a clear aria-label helps agents parse your site's navigation. This lets them discover pages and understand your information architecture.
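
The landmark and navigation checks above can be sketched as a page skeleton; the labels and URLs are illustrative.

```html
<!-- Landmarks let an agent jump straight to the region it needs -->
<header>
  <nav aria-label="Primary">
    <a href="/pricing">Pricing</a>
    <a href="/docs">Documentation</a>
  </nav>
</header>

<main>
  <h1>Page title</h1>
  <section aria-labelledby="features-heading">
    <h2 id="features-heading">Features</h2>
    <!-- Section content -->
  </section>
</main>

<footer>
  <!-- Contact, legal, secondary links -->
</footer>
```

Note that `<main>`, `<nav>`, `<header>`, and `<footer>` carry implicit landmark roles, so no explicit `role` attributes are needed here.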

Tool Integration

WebMCP Meta Tag
A WebMCP meta tag tells visiting agents that your page exposes tools they can call. Without this signal, agents won't know to look for WebMCP capabilities.
WebMCP API in Scripts
References to navigator.modelContext in your scripts indicate your page implements the WebMCP API. This is the runtime interface agents use to discover and call your tools.
Registered Tools
Registered WebMCP tools are functions your page exposes for agents to call directly — like "add to cart", "search", or "filter results". No tools means agents can only read, not act.
Tool Descriptions & Schemas
Agents choose which tools to call based on descriptions and input schemas. Missing or vague descriptions lead agents to call the wrong tool or pass bad parameters.
Page State via provideContext()
provideContext() shares your page's current state — selected filters, user info, cart contents — with agents. Without it, agents must infer state from the DOM, which is error-prone.
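
WebMCP is an early-stage proposal, so treat the sketch below as illustrative only: it follows the shapes named above (a WebMCP meta tag, navigator.modelContext, tool registration, provideContext()), but the meta tag name, method signatures, and argument shapes are assumptions that may differ from the evolving spec.

```html
<!-- Assumed meta tag name; check the current WebMCP spec -->
<meta name="webmcp" content="enabled">

<script>
  // Sketch of tool registration via the proposed navigator.modelContext API.
  if (navigator.modelContext) {
    // Expose an action agents can call directly instead of driving the UI.
    navigator.modelContext.registerTool({
      name: "search_products",
      description: "Search the product catalog by keyword.",
      inputSchema: {
        type: "object",
        properties: { query: { type: "string" } },
        required: ["query"]
      },
      async execute({ query }) {
        const res = await fetch(`/api/search?q=${encodeURIComponent(query)}`);
        return res.json(); // assumed endpoint, for illustration
      }
    });

    // Share current page state so agents don't have to infer it from the DOM.
    navigator.modelContext.provideContext({
      cart: { items: 0 },
      filters: { category: "all" }
    });
  }
</script>
```

The description and inputSchema matter as much as the function itself: they are what an agent reads when deciding which tool to call and what parameters to pass.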