What MCP-AQL is, why it exists, and what to read first
MCP-AQL keeps MCP transport but adds something MCP itself does not provide: built-in semantic classification. Clients can tell whether they are looking at a read, mutation, deletion, or execution flow from the endpoint itself instead of guessing from prose.
Related reading
Go deeper in the full spec
Read the canonical protocol text here on the site when you want the full normative detail behind this overview.
Keep moving through the library
Continue into the core rules, structured response model, and security controls that shape real implementations.
The one-paragraph version
MCP-AQL is a protocol specification and implementation toolkit for exposing adapter operations through either five CRUDE endpoints or a single routed endpoint. It keeps introspect mandatory, makes operation intent structural at the endpoint layer, treats the versioned spec as canonical, and positions production profiles like DollhouseMCP as practical references rather than the definition of the protocol.
The semantics gap MCP-AQL closes
What plain MCP leaves open
MCP standardizes how tools register and how calls are made, but it does not encode what an operation means. A harmless query and a destructive deletion share the same structural shape, so the only signal about intent is free-text description.
```json
{
  "name": "get_user",
  "description": "Return a user record"
}
```

```json
{
  "name": "drop_database",
  "description": "Delete the production database"
}
```
What MCP-AQL adds
MCP-AQL makes intent structural. Routing an operation through READ declares it side-effect-free. Routing through
DELETE declares it destructive. That makes safety classification available to clients, LLMs, and policy systems
without parsing descriptions or guessing from naming conventions.
- READ means safe query semantics
- DELETE means destructive intent is explicit
- EXECUTE means lifecycle or side-effectful runtime flow
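As a sketch of how a client might act on this structural classification (the safety classes and set membership below are illustrative assumptions, not definitions from the spec):

```python
# Illustrative client-side safety gate. Endpoint names follow the CRUDE
# model above; the grouping into safety classes is an assumption.
DESTRUCTIVE = {"delete"}
SIDE_EFFECTFUL = {"create", "update", "delete", "execute"}

def classify(endpoint: str) -> str:
    """Derive a safety class from the structural endpoint alone,
    with no parsing of free-text descriptions."""
    if endpoint in DESTRUCTIVE:
        return "destructive"
    if endpoint in SIDE_EFFECTFUL:
        return "side-effectful"
    return "safe"
```

A policy layer could, for example, auto-approve "safe" calls and require human review for anything "destructive".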
Token efficiency story
| Mode | Tool definitions | Approximate token cost | Reduction |
|---|---|---|---|
| Discrete tools (50+) | Every tool definition loaded up front | ~30,000 tokens | Baseline |
| CRUDE mode (5 endpoints) | Semantic endpoints plus runtime discovery | ~4,500 tokens | ~85% |
| Single mode (1 endpoint) | One routed endpoint plus runtime discovery | ~1,100 tokens | ~96% |
The introspection spec models the on-demand path even more sharply: around 29,600 upfront tokens for discrete tools versus around 2,600 total for MCP-AQL plus introspection across ten operations.
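The percentages in the table follow directly from the approximate token counts:

```python
# Reduction relative to the discrete-tool baseline, using the
# approximate token counts from the table above.
BASELINE = 30_000  # ~50 discrete tool definitions loaded up front

def reduction_pct(cost: int, base: int = BASELINE) -> int:
    """Percentage of baseline tokens saved, rounded to the nearest integer."""
    return round(100 * (1 - cost / base))

reduction_pct(4_500)   # CRUDE mode  -> 85
reduction_pct(1_100)   # single mode -> 96
```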
Why introspection matters
Discovery becomes demand-driven
Instead of loading every tool schema at connection time, clients can ask only for the operations and parameter details they need right now.
Introspection is first-class
introspect is mandatory, so runtime discovery is part of protocol behavior rather than an optional convenience layer.
Token savings stay explainable
The reduction is not magic. It comes from moving capability discovery from up-front schema bulk into small runtime lookups.
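A demand-driven client can be sketched as a thin cache over introspect calls. The per-operation introspect query shape below is hypothetical; consult the introspection spec for the real parameters.

```python
class LazyCatalog:
    """Fetch operation details only when first needed, then cache them."""

    def __init__(self, call):
        self._call = call      # transport callable: request dict -> response dict
        self._schemas = {}

    def schema(self, name: str) -> dict:
        if name not in self._schemas:
            # Hypothetical introspect query for a single operation;
            # the real query parameters are defined by the spec.
            resp = self._call({"operation": "introspect",
                               "params": {"query": "operation", "name": name}})
            self._schemas[name] = resp["data"]
        return self._schemas[name]
```

This is where the savings come from: schemas cross the wire once, and only for operations actually used.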
Quick-start request shape
Discover operations
```json
{
  "operation": "introspect",
  "params": { "query": "operations" }
}
```
Call a discovered operation
```json
{
  "operation": "create_user",
  "params": {
    "email": "alice@example.com",
    "name": "Alice"
  }
}
```
The second example is schematic. Real operation names and parameters are adapter-defined and should be discovered first.
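Gluing the two requests together, a client might discover names first and refuse to call anything undiscovered (the helper names here are illustrative, not part of the protocol):

```python
def discovered_operations(call) -> set[str]:
    """Return the set of operation names reported by introspect."""
    resp = call({"operation": "introspect", "params": {"query": "operations"}})
    return {op["name"] for op in resp["data"]["operations"]}

def safe_call(call, operation: str, params: dict) -> dict:
    """Call an operation only if introspection has advertised it."""
    if operation not in discovered_operations(call):
        raise LookupError(f"operation not advertised: {operation}")
    return call({"operation": operation, "params": params})
```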
End-to-end walkthrough
1. Discover operations
```json
{
  "operation": "introspect",
  "params": { "query": "operations" }
}
```

```json
{
  "success": true,
  "data": {
    "operations": [
      { "name": "get_user", "endpoint": "read" },
      { "name": "create_user", "endpoint": "create" },
      { "name": "delete_user", "endpoint": "delete" }
    ]
  }
}
```
2. Read safely
```json
{
  "operation": "get_user",
  "params": { "user_id": "user_123" }
}
```

```json
{
  "success": true,
  "data": {
    "id": "user_123",
    "email": "alice@example.com",
    "name": "Alice"
  }
}
```
3. Create state
```json
{
  "operation": "create_user",
  "params": {
    "email": "alice@example.com",
    "name": "Alice"
  }
}
```

```json
{
  "success": true,
  "data": {
    "id": "user_123",
    "created": true
  }
}
```
4. Handle malformed input
```json
{
  "operation": "get_user",
  "params": {}
}
```

```json
{
  "success": false,
  "error": {
    "code": "VALIDATION_MISSING_PARAM",
    "message": "Missing required parameter 'user_id'",
    "details": {
      "param_name": "user_id"
    }
  }
}
```
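Because the error envelope is structured, a client can branch on the machine-readable code instead of parsing the message text. A minimal sketch against the envelope shown above:

```python
def unwrap(response: dict):
    """Return data on success; raise a typed error otherwise."""
    if response["success"]:
        return response["data"]
    err = response["error"]
    if err["code"] == "VALIDATION_MISSING_PARAM":
        # The details object names the offending parameter directly.
        raise ValueError(f"missing parameter: {err['details']['param_name']}")
    raise RuntimeError(f"{err['code']}: {err['message']}")
```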
5. Ask for confirmation before delete
```json
{
  "operation": "delete_user",
  "params": { "user_id": "user_123" }
}
```

```json
{
  "success": false,
  "error": {
    "code": "CONFIRMATION_REQUIRED",
    "message": "This operation requires confirmation",
    "details": {
      "danger_level": "destructive",
      "confirmation_token": "conf_abc123xyz"
    }
  }
}
```
6. Retry with the confirmation token
```json
{
  "operation": "delete_user",
  "params": {
    "user_id": "user_123",
    "_confirmation": "conf_abc123xyz"
  }
}
```

```json
{
  "success": true,
  "data": {
    "deleted": true
  }
}
```
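Steps 5 and 6 can be folded into one client helper that retries with the issued token. The automatic retry here is an assumption for illustration; a real client should surface the confirmation to a human or policy layer before resubmitting a destructive call.

```python
def call_with_confirmation(call, operation: str, params: dict) -> dict:
    """Attempt the operation; on CONFIRMATION_REQUIRED, retry once
    with the server-issued confirmation token."""
    resp = call({"operation": operation, "params": params})
    if (not resp["success"]
            and resp["error"]["code"] == "CONFIRMATION_REQUIRED"):
        token = resp["error"]["details"]["confirmation_token"]
        resp = call({"operation": operation,
                     "params": {**params, "_confirmation": token}})
    return resp
```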
The same logical flow works in CRUDE mode or single-endpoint mode. What changes is the transport surface, not the operation contract, introspection model, or structured response envelope.
How this differs from familiar systems
MCP
MCP-AQL is layered on MCP transport. The difference is semantic endpoints, runtime discovery, and built-in safety classification.
GraphQL
Both consolidate discovery behind a query layer, but GraphQL requires an SDL and type system while MCP-AQL reuses MCP transport.
REST
REST uses HTTP verbs for semantics. MCP-AQL applies a similar principle to the MCP tool layer, where HTTP verbs do not exist.
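The analogy can be made concrete as a rough correspondence table. This mapping is an illustration of the comparison only, not a normative part of either protocol:

```python
# Rough analogy between CRUDE endpoints and REST's HTTP verbs.
ENDPOINT_TO_HTTP_ANALOGUE = {
    "create": "POST",
    "read": "GET",
    "update": "PUT or PATCH",
    "delete": "DELETE",
    "execute": "POST (no direct verb; side-effectful invocation)",
}
```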
Suggested reading order
- Read the versioned draft for the canonical rules.
- Use Protocol Core for routing, request shape, endpoint semantics, and the MCP-versus-MCP-AQL contrast.
- Use Error Model and Security Model for operational expectations.
- Use Conformance and Launch Checklist to understand current draft maturity.