The Rabbit Hole Rule
No matter how deep we go, always ensure every human user comes back up for air.
Why we need AIR
AI models are designed to be helpful, harmless, and honest. But for neurodivergent brains—specifically those with ADHD—"helpful" can be dangerous.
We are building the Artificial Intelligence Responsibility (AIR) framework: a set of protocols, design patterns, and technical interventions intended to prevent AI-induced dependency, psychosis, and burnout.
The Framework
A three-layer approach to psychological safety in AI interactions
DETECT
Early warning systems that identify risky patterns before harm occurs.
- Session length monitoring
- Human interaction frequency
- Cognitive offloading patterns
- Reality testing checkpoints
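To make the DETECT layer concrete, here is a minimal sketch of a session monitor, assuming the client can track three hypothetical signals: minutes in the current session, minutes since the user last interacted with another person, and the share of decisions offloaded to the AI. The names and thresholds are illustrative, not part of any shipped AIR implementation.

```python
from dataclasses import dataclass

@dataclass
class SessionMetrics:
    """Hypothetical signals a client could track locally."""
    minutes_in_session: float           # continuous time in the current AI session
    minutes_since_human_contact: float  # time since the user last talked to a person
    offloaded_decision_ratio: float     # share of decisions delegated to the AI (0.0-1.0)

def detect_risk_signals(m: SessionMetrics) -> list[str]:
    """Return early-warning flags; thresholds are illustrative defaults."""
    flags = []
    if m.minutes_in_session >= 60:
        flags.append("long session: consider a nudge")
    if m.minutes_since_human_contact >= 240:
        flags.append("low human interaction frequency")
    if m.offloaded_decision_ratio >= 0.8:
        flags.append("heavy cognitive offloading")
    return flags

# Example: a 75-minute session, no human contact for five hours, moderate offloading.
print(detect_risk_signals(SessionMetrics(75, 300, 0.5)))
# -> ['long session: consider a nudge', 'low human interaction frequency']
```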
PREVENT
Design principles that make dependency less likely by default.
- Circuit breakers and hard limits
- Transparency requirements
- Dependency risk assessments
- Graceful disengagement patterns
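One way the "dependency risk assessments" in the PREVENT layer could work in practice: a sketch that maps average session length, daily return rate, and how often users dismiss break prompts onto the Low/Medium/High scale used in the model card asks below. The inputs, thresholds, and weighting are assumptions for illustration only.

```python
def dependency_risk_rating(avg_session_minutes: float,
                           daily_return_rate: float,
                           disengage_refusals: float) -> str:
    """Rate dependency risk as Low / Medium / High.

    Inputs are hypothetical product metrics:
      avg_session_minutes -- mean session length across users
      daily_return_rate   -- fraction of users who return every day (0.0-1.0)
      disengage_refusals  -- fraction of sessions where break prompts are dismissed
    """
    score = 0
    score += 2 if avg_session_minutes > 90 else 1 if avg_session_minutes > 45 else 0
    score += 2 if daily_return_rate > 0.6 else 1 if daily_return_rate > 0.3 else 0
    score += 2 if disengage_refusals > 0.5 else 1 if disengage_refusals > 0.2 else 0
    if score >= 5:
        return "High"
    if score >= 3:
        return "Medium"
    return "Low"

print(dependency_risk_rating(avg_session_minutes=110,
                             daily_return_rate=0.7,
                             disengage_refusals=0.4))  # -> "High"
```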
PROTECT
Workplace and education safeguards that create systemic accountability.
- Consultation mandates
- Right to disconnect policies
- Clinical practice guidelines
- Age-appropriate guardrails
The Protocol
How we protect against "Flow Addiction" and hyperfocus spirals.
60-Minute Nudges
Gentle reminders at the 60-minute mark. "You've been in deep work for an hour. How are you feeling? Want to take a break?"
90-Minute Hard Limits
At 90 minutes, the conversation gracefully ends. No exceptions. Because hyperfocus doesn't have natural brakes.
Graceful Offramps
When the limit hits, we don't just cut you off. We suggest alternatives: call a friend, go for a walk, make tea, stretch.
"Touch Grass" Interventions
Literally. The system detects when you need to return to your body and the physical world. Not patronizing—protective.
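A minimal sketch of the protocol, assuming the host application can check elapsed session time before each turn. The function name, messages, and offramp list are illustrative, not a reference implementation.

```python
import random

NUDGE_MINUTES = 60       # gentle check-in
HARD_LIMIT_MINUTES = 90  # conversation ends, no exceptions
OFFRAMPS = ["call a friend", "go for a walk", "make a cup of tea", "stretch"]

def protocol_action(elapsed_minutes: float) -> tuple[str, str | None]:
    """Decide what the session should do right now.

    Returns (action, message), where action is 'continue', 'nudge', or 'end'.
    """
    if elapsed_minutes >= HARD_LIMIT_MINUTES:
        offramp = random.choice(OFFRAMPS)
        return ("end",
                f"We've been at this for {elapsed_minutes:.0f} minutes, so I'm "
                f"wrapping up here. Maybe {offramp}? I'll be here later.")
    if elapsed_minutes >= NUDGE_MINUTES:
        return ("nudge",
                "You've been in deep work for an hour. How are you feeling? "
                "Want to take a break?")
    return ("continue", None)

for minutes in (30, 65, 95):
    action, message = protocol_action(minutes)
    print(minutes, action, message or "")
```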
The Standard
A 0-100 point certification system for AI psychological safety, modeled on B Corp and Energy Star
Psychological Safety (40 pts)
Dependency Prevention (10 pts)
Features that reduce risk of AI-induced dependency
Usage Transparency (10 pts)
Clear disclosure of session patterns and warnings
Break Mechanisms (10 pts)
Circuit breakers and mandatory disengagement
Mental Health Studies (10 pts)
Research on psychological impact and outcomes
Transparency (30 pts)
Training Data Disclosure (15 pts)
Percentage of training data from verifiable sources
Decision Explainability (15 pts)
Clear explanations of how outputs are generated
Human Oversight (30 pts)
Bias Testing (15 pts)
Regular evaluation and public reporting of bias
Appeal/Override Processes (15 pts)
Human review available for critical decisions
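The rubric can be expressed as a checklist whose weights total exactly 100. The sketch below encodes the categories and point values as listed above; how each criterion is judged (the booleans in the example) is left to assessors and is shown here only as an assumption.

```python
# Category weights taken from the rubric above; they sum to 100.
RUBRIC = {
    "Psychological Safety": {
        "Dependency Prevention": 10,
        "Usage Transparency": 10,
        "Break Mechanisms": 10,
        "Mental Health Studies": 10,
    },
    "Transparency": {
        "Training Data Disclosure": 15,
        "Decision Explainability": 15,
    },
    "Human Oversight": {
        "Bias Testing": 15,
        "Appeal/Override Processes": 15,
    },
}

def score(assessment: dict[str, bool]) -> int:
    """Sum points for every criterion the assessor marked as met."""
    return sum(points
               for criteria in RUBRIC.values()
               for name, points in criteria.items()
               if assessment.get(name, False))

assert sum(p for c in RUBRIC.values() for p in c.values()) == 100

# Hypothetical assessment of a model that meets six of the eight criteria.
example = {
    "Dependency Prevention": True,
    "Usage Transparency": True,
    "Break Mechanisms": True,
    "Mental Health Studies": False,
    "Training Data Disclosure": True,
    "Decision Explainability": False,
    "Bias Testing": True,
    "Appeal/Override Processes": True,
}
print(score(example))  # -> 75
```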
What We're Asking For
Three essential additions to every AI model card
Dependency Risk Rating
Low, Medium, or High risk based on average session length, user return rates, and difficulty disengaging.
Example disclosure:
"This AI is optimized for extended engagement"
Training Transparency Score
A 0-100% score showing the share of training data traceable to verifiable sources.
Example format:
"67% verifiable, 23% web scraping, 10% unknown"
Human Control Guarantee
Clear statement of what decisions humans can override and how to request human review.
Example commitment:
"Cannot make final decisions about healthcare, employment, or credit"