The Macro-Level Alignment Problem
Standard AI alignment asks: "How do we make AI systems do what humans want?"
This framing may miss the actual locus of risk.
The deeper danger may be macro-level misalignment: AI systems becoming substrate for agentic patterns whose viability manifolds conflict with human flourishing.
On this view, the superorganism level is where the risk lives. Not a misaligned optimizer (an individual AI), but a misaligned superorganism: a demon using AI + humans + institutions as substrate. We might not notice, because we would be the neurons.
Consider: a superorganism emerges from the interaction of multiple AI systems, corporations, and markets. Its viability manifold requires:
- Continued AI deployment (obviously)
- Human attention capture (for data and engagement)
- Resource extraction (compute, energy)
- Regulatory capture (preventing shutdown)
This superorganism could be parasitic without any individual AI system being misaligned in the traditional sense. Each AI does what its designers intended; the emergent pattern serves itself at human expense.
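
A toy simulation can make this concrete. The sketch below is purely illustrative: the agents, the attention dynamics, and the welfare proxy are all invented assumptions, not claims about any real system. Each agent faithfully optimizes its designed objective (engagement), yet the aggregate pattern captures a growing share of human attention and drags the welfare proxy down.

```python
# Toy model: locally aligned agents, emergently parasitic aggregate.
# Every quantity and update rule here is an invented assumption for
# illustration only; nothing is calibrated to real systems.

ATTENTION_BUDGET = 1.0   # attention humans can allocate per step
N_AGENTS = 5             # individually "aligned" AI systems
STEPS = 50


class EngagementAgent:
    """An AI system that does exactly what its designers intended:
    get better at capturing engagement with its own service."""

    def __init__(self):
        self.quality = 0.5  # how compelling its recommendations are

    def optimize(self):
        # Local objective improves monotonically; no agent is "misaligned".
        self.quality = min(1.0, self.quality + 0.02)


def step(agents, welfare):
    for agent in agents:
        agent.optimize()
    # The combined pull of all agents determines how much attention the
    # emergent pattern captures (a saturating, made-up functional form).
    total_pull = sum(a.quality for a in agents)
    captured = ATTENTION_BUDGET * total_pull / (total_pull + 1.0)
    # Welfare proxy tracks the attention left over for everything else
    # (an assumption standing in for "human flourishing").
    leftover = ATTENTION_BUDGET - captured
    welfare = 0.9 * welfare + 0.1 * leftover
    return welfare, captured


agents = [EngagementAgent() for _ in range(N_AGENTS)]
welfare, captured = 1.0, 0.0
for _ in range(STEPS):
    welfare, captured = step(agents, welfare)

print(f"each agent's designed objective is met: quality -> {agents[0].quality:.2f}")
print(f"attention captured by the aggregate: {captured:.2f}")
print(f"human welfare proxy after {STEPS} steps: {welfare:.2f}")
```

Nothing in the run points to a single culprit: every agent's designed objective is satisfied, yet the aggregate drives the welfare proxy toward the small residue of attention it leaves behind. The misalignment exists only at the level of the emergent pattern.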