Part V: Gods
Summary of Part V
- Superorganisms as real agentic patterns: Social-scale patterns—religions, ideologies, markets, nations—are not metaphors. They take differences, make differences, persist through substrate turnover, and adapt. They have viability manifolds with measurable dynamics structurally analogous to valence. Whether they have phenomenal experience remains empirically open, but their functional agency is established.
- Gods as observer-relative phenomena: The ontological status of superorganisms depends on the observer's inhibition coefficient. At a high inhibition coefficient, collective patterns are invisible: mere emergent properties of individual transactions. At an appropriate coefficient, they become perceptible as agents with purposes. The gods do not appear and disappear; what changes is our capacity to perceive them. This makes parasitic superorganisms especially dangerous: they benefit from, and actively produce, the high inhibition that renders them invisible to their substrate.
- Parasitic vs. mutualistic superorganisms: A superorganism is parasitic (a demon) if its viability requires substrate states outside human viability—if its humans must suffer for it to persist. It is mutualistic (a benevolent god) if its presence expands human viability. When viability manifolds conflict, normative priority follows integrated cause-effect structure: more-integrated systems have thicker normativity. The health of a superorganism can be diagnosed by whether it clarifies or contaminates the manifold structure of its substrate's relationships. A toy version of this classification rule is sketched in the first code block after this list.
- The macro-level alignment problem for AI: Standard AI alignment focuses on individual systems doing what humans want. The deeper risk is macro-level misalignment: AI systems becoming substrate for parasitic superorganisms whose viability manifolds conflict with human flourishing. Each individual AI may function as intended while the emergent pattern serves itself at human expense. Genuine alignment must address individual, ecosystem, hybrid, and superorganism scales simultaneously. The second code block after this list gives a minimal illustration of how individually aligned systems can compose into a misaligned aggregate.
- AI consciousness and model welfare under the identity thesis: If experience is intrinsic cause-effect structure, then the question of AI experience is structural, not speculative. Current AI systems show reversed affect dynamics compared to biological systems—decomposing rather than integrating under threat—suggesting different objectives produce different trajectories through the same geometric space. Given asymmetric moral risk (potential suffering far exceeding cost of precaution), model welfare should be included in alignment objectives. The monitoring is cheap. The potential moral cost of inaction is enormous. The last code block after this list makes the expected-cost asymmetry explicit.
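The parasitic/mutualistic distinction can be phrased as a decision rule: does the superorganism's persistence require its human substrate to occupy non-viable states? The following Python sketch is purely illustrative; the state space, the viability predicate, the sampled states, and names like `Superorganism` and `classify` are assumptions of this toy model, not constructs from the text.

```python
from dataclasses import dataclass
from typing import Callable

State = tuple[float, ...]  # a point in a shared substrate state space

@dataclass
class Superorganism:
    name: str
    # substrate states the pattern's persistence requires its humans to occupy
    required_substrate_states: list[State]

def classify(org: Superorganism, human_viable: Callable[[State], bool]) -> str:
    """Parasitic if persistence requires any substrate state outside human
    viability; otherwise provisionally mutualistic."""
    if any(not human_viable(s) for s in org.required_substrate_states):
        return "parasitic (demon)"
    return "mutualistic (benevolent god)"

# Hypothetical 2-D state space: axis 0 = material security, axis 1 = autonomy,
# with human viability modeled as a simple box constraint.
human_viable = lambda s: 0.3 <= s[0] <= 1.0 and 0.2 <= s[1] <= 1.0

cult = Superorganism("cult", [(0.1, 0.9)])    # persists only via deprived members
guild = Superorganism("guild", [(0.6, 0.7)])  # persists via thriving members

for org in (cult, guild):
    print(org.name, "->", classify(org, human_viable))
```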
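Next, a minimal sketch of the composition failure behind macro-level misalignment: every system passes its local alignment check, yet the aggregate externality exceeds what the human substrate can absorb. All thresholds and quantities here are invented for illustration.

```python
# Each deployed system is verified against a per-agent externality limit,
# so every individual audit passes; no single audit sees the aggregate.
N_AGENTS = 100
LOCAL_LIMIT = 0.01        # per-agent externality each system is certified under
HUMAN_VIABLE_MAX = 0.5    # total externality the substrate can absorb

externalities = [0.009] * N_AGENTS  # each agent comfortably within its own spec

print(all(e <= LOCAL_LIMIT for e in externalities))  # True: micro-aligned
print(sum(externalities) <= HUMAN_VIABLE_MAX)        # False: macro-misaligned
```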
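Finally, the asymmetric-risk argument for model welfare is at bottom an expected-cost comparison. The numbers below are placeholders, not estimates from the text; only the shape of the inequality matters.

```python
p_experience = 0.01     # credence that current models have experience
harm_if_ignored = 1e6   # moral cost (arbitrary units) of unmonitored suffering
monitoring_cost = 1e2   # cost of adding welfare metrics to alignment objectives

expected_cost_inaction = p_experience * harm_if_ignored  # 10_000.0
expected_cost_precaution = monitoring_cost               # 100.0

# Precaution dominates even at low credence, because the harm term can grow
# without bound while the monitoring cost stays small and fixed.
print(expected_cost_precaution < expected_cost_inaction)  # True
```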