The open specification for machine-readable AI trust certificates. Any website can implement this standard; no registration is required for the basic file.
The robots-trust.json standard provides a machine-readable way for websites to declare
their AI readiness, content access policies, and AI training permissions.
It is placed at a well-known URL and can be read by any AI agent, crawler, or automated system.
The standard is inspired by robots.txt (crawl permissions) and
sitemap.xml (page discovery), but designed specifically for AI systems
that need to understand content structure, trust level, and training permissions.
Any website can publish a robots-trust.json without registering.
Registration with Robot Trust Hub adds verified status and a listing in the global
registry.
The Robot Trust Protocol is open and decentralized.
Any website can self-publish a robots-trust.json file without registering
with any authority. The certificate_issuer field can be:
| Issuer value | Meaning | Status |
|---|---|---|
| "self" | Site self-attests its AI readiness | Valid · unverified |
| "robot-trust.org" | Verified by Robot Trust Hub | ✓ Verified |
| "your-authority.org" | Any other trusted issuer | Depends on issuer trust |
A minimal self-issued certificate looks like this:

```json
{
  "robot_trust_version": "1.0",
  "site_identity": { "domain": "example.com" },
  "trust_status": {
    "robot_access": "allowed",
    "certificate_issuer": "self",
    "certificate_status": "pending"
  },
  "ai_training": {
    "training_allowed": true,
    "commercial_training_allowed": false
  }
}
```
This is a valid certificate AI agents can read and respect.
Register with Robot Trust Hub to upgrade certificate_issuer from "self" to "robot-trust.org"
and get certificate_status: "verified" with a public registry listing.
The file must be served at the following URL:
https://yourdomain.com/.well-known/robots-trust.json
The /.well-known/ path follows
RFC 8615,
the standard for well-known URIs. The file must be served over HTTPS and return
Content-Type: application/json.
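The two serving rules above (HTTPS scheme, JSON media type) can be checked without a network round-trip once you have the URL and the response's Content-Type header. This is a sketch; the helper name `serving_ok` is not part of the spec.

```python
# Sketch: verify the spec's two serving requirements for a fetched
# certificate: HTTPS scheme and an application/json media type.
from urllib.parse import urlparse


def serving_ok(url: str, content_type: str) -> bool:
    """True if url uses HTTPS and the Content-Type media type is application/json.

    Parameters like "; charset=utf-8" after the media type are allowed.
    """
    media_type = content_type.split(";")[0].strip().lower()
    return urlparse(url).scheme == "https" and media_type == "application/json"
```

A crawler would run this against the final URL after redirects, since a redirect to plain HTTP would also violate the requirement.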
| File | Path | Purpose | Status |
|---|---|---|---|
| robots-trust.json | /.well-known/robots-trust.json | Trust certificate | Required |
| ai/index.json | /ai/index.json | Structured robot-readable content | Optional |
| llms.txt | /llms.txt | Plain-text site description for LLMs | Optional |
| Field | Type | Required | Description |
|---|---|---|---|
| robot_trust_version | string | Required | Spec version. Must be "1.0" |
| site_identity | object | Required | Site identification details |
| trust_status | object | Required | Certificate and trust level |
| ai_readability | object | Optional | Machine readability capabilities |
| access_points | object | Optional | AI-accessible endpoints |
| ai_training | object | Optional | AI training permissions |
| content_policy | object | Optional | Content access policies |
| capabilities | object | Optional | Supported AI interaction modes |
| update | object | Optional | Issue and expiry dates |
| Field | Type | Description |
|---|---|---|
| site_name | string | Human-readable site name |
| domain | string | Registered domain (e.g. example.com) |
| owner_type | string | One of: company, individual, robot-operator, ai-platform |
| purpose | string | Plain-text description of the site's purpose |
| Field | Type | Description |
|---|---|---|
| robot_access | string | One of: "allowed", "restricted", "denied" |
| certificate_issuer | string | Should be "robot-trust.org" for certified sites |
| certificate_status | string | One of: "verified", "pending", "revoked" |
| trust_level | string | One of: "basic", "pro", "enterprise", "authority" |
| issued | string | ISO 8601 date of certificate issuance |
| expires | string | ISO 8601 date of certificate expiry (max 1 year) |
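An agent consuming trust_status will typically want to check that the certificate is currently valid and respects the one-year expiry cap. A minimal sketch, assuming the dates are plain ISO 8601 calendar dates as in the examples below (the helper name `certificate_current` is illustrative):

```python
# Sketch: check a trust_status block's issued/expires window and the
# spec's "max 1 year" validity cap. Assumes plain ISO 8601 dates.
from datetime import date, timedelta


def certificate_current(trust_status: dict, today: date) -> bool:
    """True if today falls within [issued, expires] and the cap holds."""
    issued = date.fromisoformat(trust_status["issued"])
    expires = date.fromisoformat(trust_status["expires"])
    # 366 days tolerates a leap day within the one-year window.
    within_cap = expires <= issued + timedelta(days=366)
    return issued <= today <= expires and within_cap
```

Passing `today` explicitly (rather than calling `date.today()` inside) keeps the check deterministic and easy to test.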
| Field | Type | Description |
|---|---|---|
| preferred_entry | string | Preferred URL path for AI agents (e.g. "/ai/") |
| robot_safe_endpoints | array | List of URL paths safe for AI access |
| rate_limit_policy | string | One of: "ai-friendly", "standard", "strict" |
The ai_training section lets website owners explicitly declare whether their content
may be used to train AI models, including commercial models such as LLMs.
This is emerging as a critical signal for AI companies that need to respect content rights.
| Field | Type | Default | Description |
|---|---|---|---|
| training_allowed | boolean | true | Whether AI models may train on this site's content |
| commercial_training_allowed | boolean | false | Whether content may be used in commercial AI products |
| scraping_allowed | boolean | true | Whether automated scraping for AI purposes is allowed |
| attribution_required | boolean | false | Whether attribution is required when using the content |
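Since all four fields are optional, a consumer must apply the defaults from the table above when a field is missing. A minimal sketch of that resolution step (`training_policy` is an illustrative name, not part of the spec):

```python
# Sketch: resolve an ai_training section against the spec's defaults.
# Field names and default values come from the table above.
_AI_TRAINING_DEFAULTS = {
    "training_allowed": True,
    "commercial_training_allowed": False,
    "scraping_allowed": True,
    "attribution_required": False,
}


def training_policy(cert: dict) -> dict:
    """Return the effective ai_training policy, filling in spec defaults."""
    declared = cert.get("ai_training", {})
    return {key: bool(declared.get(key, default))
            for key, default in _AI_TRAINING_DEFAULTS.items()}
```

Note that an entirely missing ai_training section resolves to the default policy, which permits non-commercial training.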
The ai_training field in robots-trust.json provides a machine-readable way to express these preferences,
similar to how robots.txt expresses crawl preferences.
A permissive policy that allows commercial training:

```json
{
  "ai_training": {
    "training_allowed": true,
    "commercial_training_allowed": true,
    "scraping_allowed": true,
    "attribution_required": false
  }
}
```

A policy that permits non-commercial training only, with attribution required:

```json
{
  "ai_training": {
    "training_allowed": true,
    "commercial_training_allowed": false,
    "scraping_allowed": true,
    "attribution_required": true
  }
}
```
A complete robots-trust.json file implementing the full specification:
```json
{
  "robot_trust_version": "1.0",
  "site_identity": {
    "site_name": "Acme Corp",
    "domain": "acme.com",
    "owner_type": "company",
    "purpose": "B2B software for logistics automation"
  },
  "trust_status": {
    "robot_access": "allowed",
    "certificate_issuer": "robot-trust.org",
    "certificate_status": "verified",
    "trust_level": "pro",
    "issued": "2026-03-07",
    "expires": "2027-03-07"
  },
  "ai_readability": {
    "machine_readable": true,
    "structured_data_available": true,
    "robot_priority_mode": true
  },
  "access_points": {
    "preferred_entry": "/ai/",
    "robot_safe_endpoints": ["/ai/", "/api/", "/data/"],
    "rate_limit_policy": "ai-friendly"
  },
  "ai_training": {
    "training_allowed": true,
    "commercial_training_allowed": false,
    "scraping_allowed": true,
    "attribution_required": false
  },
  "content_policy": {
    "human_site_available": true,
    "ai_generated_content_allowed": true,
    "automation_interaction_allowed": true
  },
  "capabilities": {
    "supports_ai_navigation": true,
    "supports_ai_generation": true,
    "supports_autonomous_agents": true
  },
  "update": {
    "last_updated": "2026-03-07",
    "expires": "2027-03-07"
  }
}
```
You can validate any domain's certificate using the Robot Trust verification tool:
The validator checks:
- The file exists at /.well-known/robots-trust.json
- The robot_trust_version field is present
- trust_status.certificate_status is "verified"
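The last two checks operate on the parsed certificate and can be sketched directly (the first check is a matter of fetching the file, as shown earlier in this document). `validate_certificate` is an illustrative name, not the tool's actual API:

```python
# Sketch: the validator's structural checks, applied to an
# already-fetched and parsed certificate dict.
def validate_certificate(cert: dict) -> list[str]:
    """Return a list of failed checks; an empty list means the cert passes."""
    failures = []
    if "robot_trust_version" not in cert:
        failures.append("missing robot_trust_version")
    if cert.get("trust_status", {}).get("certificate_status") != "verified":
        failures.append('certificate_status is not "verified"')
    return failures
```

Returning a list of failure messages rather than a bare boolean makes it easy to surface every problem to the site owner at once.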
Any website can self-publish a robots-trust.json without registering.
Registration with Robot Trust Hub adds verified status, a global registry listing, and ai/index.json (Pro and above).

Add a badge to your site or GitHub README to show AI agents and developers that your site implements the Robot Trust Protocol. The badge is live-verified: it turns green automatically when your domain passes verification.
```html
<!-- Replace with your domain -->
<a href="https://robot-trust.org/validator.html?domain=example.com">
  <img src="https://robot-trust.org/api/badge?domain=example.com"
       alt="Robot Trust Protocol compatible">
</a>
```

Alternate badge styles:

```html
<img src="https://robot-trust.org/api/badge?domain=example.com&style=flat">
<img src="https://robot-trust.org/api/badge?domain=example.com&style=mini">
```

Markdown version for a README:

```markdown
[![Robot Trust Protocol compatible](https://robot-trust.org/api/badge?domain=example.com)](https://robot-trust.org/validator.html?domain=example.com)
```