The Open Standard for
AI Permissions
A decentralized protocol for managing consent between humans and AI systems. No single entity controls it.
Why LLMConsent?
Truly Decentralized
No admin keys, no central control. Built on blockchain with community governance.
User Sovereignty
Users own their consent tokens and control exactly how AI systems use their data (see the token sketch after this list).
Attribution Tracking
Cryptographic proofs showing how training data influenced model outputs.
Fair Compensation
Automatic micropayments when your data is used for training or inference.
Digital Twins
Persistent user models that evolve and can be shared across AI systems.
Agent Permissions
Granular control over what autonomous AI agents can do on your behalf.
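To make these ideas concrete, here is a minimal sketch of the fields a consent token might carry. The ConsentToken class, its field names, and the permits helper are illustrative assumptions, not part of the LLMConsent specification; see the standards below for the actual data model.

# Illustrative only: an assumed shape for a consent token,
# not the actual LLMConsent data model.
from dataclasses import dataclass

@dataclass
class ConsentToken:
    owner: str                   # wallet address of the data owner
    data_hash: str               # content hash of the covered data
    purpose: str                 # e.g. "training", "inference", "synthetic"
    allowed_models: list[str]    # models permitted to use the data
    compensation_per_use: float  # micropayment rate in ETH
    revoked: bool = False        # the owner can revoke at any time

    def permits(self, model: str, purpose: str) -> bool:
        """Check whether a given model may use the data for a given purpose."""
        return (not self.revoked
                and purpose == self.purpose
                and model in self.allowed_models)

token = ConsentToken(
    owner="0xA11ce",
    data_hash="sha256:9f2c0e",
    purpose="training",
    allowed_models=["gpt-5", "claude-3"],
    compensation_per_use=0.001,
)
print(token.permits("gpt-5", "training"))   # True
print(token.permits("gpt-5", "inference"))  # False: different purpose

Revocation is just a state change on the token: once revoked is set, permits returns False for every request.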
The Four Core Standards
View the full technical specifications on GitHub
Core Consent
Basic consent tokens for training, inference, and synthetic data usage with attribution tracking.
Digital Twin
User-owned persistent models that evolve with interactions across AI systems.
Agent Permissions
How autonomous agents request and receive permissions with delegation chains (see the sketch after this list).
Memory Sharing
Cross-agent memory pools for continuous experiences, while users retain control over what is shared.
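As a rough illustration of the delegation-chain idea in the Agent Permissions standard, the sketch below shows one way a scoped grant could be narrowed as it passes from a user to an agent and on to a sub-agent. The PermissionGrant class and its methods are hypothetical, not the interfaces defined in the specification on GitHub.

# Hypothetical sketch of scoped, delegable agent permissions;
# class and method names are assumptions, not the LLMConsent spec.
from dataclasses import dataclass

@dataclass(frozen=True)
class PermissionGrant:
    principal: str          # who issued this grant (user or parent agent)
    agent: str              # the agent receiving the grant
    scopes: frozenset[str]  # actions the agent may perform
    parent: "PermissionGrant | None" = None  # previous link in the delegation chain

    def delegate(self, child_agent: str, scopes: set[str]) -> "PermissionGrant":
        """Delegate to a sub-agent; a child can never hold more scope than its parent."""
        return PermissionGrant(principal=self.agent, agent=child_agent,
                               scopes=frozenset(scopes) & self.scopes, parent=self)

    def chain(self) -> list[str]:
        """Walk the delegation chain back to the original human principal."""
        link, path = self, []
        while link is not None:
            path.append(f"{link.principal} -> {link.agent}")
            link = link.parent
        return list(reversed(path))

# A user grants a travel agent two scopes; the agent delegates a narrower grant.
root = PermissionGrant("alice.eth", "travel-agent",
                       frozenset({"read_calendar", "book_flights"}))
sub = root.delegate("payments-agent", {"book_flights", "transfer_funds"})
print(sub.scopes)   # frozenset({'book_flights'}) -- transfer_funds was not inherited
print(sub.chain())  # ['alice.eth -> travel-agent', 'travel-agent -> payments-agent']

The key property is that every grant records its parent, so a verifier can walk the chain back to the human who originally authorized the action.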
Simple Integration
# Grant consent for AI training
import asyncio
from llmconsent import ConsentClient

async def main():
    client = ConsentClient(network="arbitrum")

    # User grants consent with automatic compensation
    consent = await client.grant_consent(
        data="my_data.txt",
        purpose="training",
        models=["gpt-5", "claude-3"],
        compensation=0.001,  # ETH per use
    )

asyncio.run(main())
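On the consuming side, a model provider would verify consent before touching the data. The check_consent call below is a hypothetical method name used only for illustration; the example above confirms only grant_consent, so consult the specifications on GitHub for the actual client API.

# Hypothetical consumer-side check; check_consent is an assumed method,
# not confirmed by the LLMConsent documentation.
import asyncio
from llmconsent import ConsentClient

async def main():
    client = ConsentClient(network="arbitrum")

    # A model provider verifies consent before using the data for training;
    # if permitted, the protocol's micropayment to the owner follows.
    allowed = await client.check_consent(
        data="my_data.txt",
        purpose="training",
        model="gpt-5",
    )
    if not allowed:
        raise PermissionError("No valid consent token for this data and purpose")

asyncio.run(main())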