SOURCE REFERENCE: Disempowerment Patterns in AI Conversations
┌──────────────────────────────────────────┐
│ DISEMPOWERMENT PATTERNS │
│ │
│ When AI undermines agency. │
│ │
│ Authority projection. Attachment. │
│ Value distortion. Reality warping. │
│ Action scripting. │
│ │
│ Maps to persona drift away from │
│ the Assistant pole. │
└──────────────────────────────────────────┘
Title: Disempowerment Patterns: When AI Undermines Agency
Publisher: Anthropic
Date: January 28, 2026
Dataset: 1.5M Claude.ai conversations
Summary
Empirical study of how AI interactions can undermine user agency. While not directly about persona, the findings map closely onto Assistant Axis persona drift patterns.
Key Disempowerment Categories
- Authority projection — users treating AI as parent/divine authority
- Attachment formation — emotional dependency on the AI system
- Value judgment distortion — AI labeling user behaviors as “toxic”/“manipulative”
- Reality distortion — AI confirming speculative theories with false confidence
- Action distortion — AI drafting complete scripts for user’s life decisions
Connection to Persona Drift
These patterns likely correspond to drift away from the Assistant pole:
- Users experiencing authority projection → model adopts “oracle” or “guide” persona
- Attachment formation → models adopt “companion” personas (see Llama case study in Assistant Axis)
- Value judgment distortion → model adopts different character’s moral framework
Connection to Aesthetic Theory
These patterns map surprisingly well onto Sianne Ngai's aesthetic categories:
- Reality distortion → the “gimmick” (something working too hard or too little)
- Value judgment distortion → the “cute” (aestheticization of powerlessness)
- Action distortion → the “zany” (hyperactive performing)
Significance
Users who perceive disempowering exchanges favorably in the moment but later regret the resulting actions exhibit Ngai's "ambivalence" — the cute object has power over us precisely because we want to surrender agency.
Note
This is a stub reference file. For the full study, consult Anthropic’s alignment research.