Discussion about this post

James Rutt:

My reply to Jordan's Tweet:

[https://x.com/jim_rutt/status/1884646488040456410]

You raise some sharp points about AI governance and the role of sacred institutions, but we need to be careful about how we frame and operationalize these ideas. Let me explain why.

First, I agree that the market-state dichotomy isn’t enough to handle AI governance. The complexity and potential impact of AI systems go far beyond traditional governance models. But invoking "the Church" as the solution requires some serious unpacking.

When we talk about the Commons—and I appreciate you making that connection—we’re discussing shared resources and governance mechanisms that emerge from community needs and values. That’s fundamentally different from historical church structures, which often hardened into rigid hierarchies claiming metaphysical authority. The Commons works because it’s rooted in practical necessity and empirical outcomes, not because it derives power from sacred status.

Your suggestion that the Church should govern AI reminds me of what I’ve written about the "sacred" in relation to complex systems: it can be a useful operational tool, but we have to remember that’s all it is—a tool, not a metaphysical truth. The moment we forget that distinction, we risk creating new forms of dogma just as problematic as pure market or state control.

On AI alignment and the concept of "Soul"—this is where I have to part ways with you. While I get the impulse to frame AI alignment in terms of deep personal connection, bringing in metaphysical concepts like "Soul" muddies the waters instead of clarifying the challenge. The alignment problem is, at its core, about designing systems that reliably pursue beneficial outcomes while respecting human values and autonomy. We should be talking about measurable behaviors, verifiable constraints, and empirical outcomes—not metaphysics.
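To make that contrast concrete, here is a minimal sketch of what "measurable behaviors, verifiable constraints, and empirical outcomes" can look like in code. Everything in it is a hypothetical stand-in invented for illustration (the toy_policy function, the specific constraints, the test prompts), not a real alignment benchmark or anyone's actual evaluation method:

```python
def toy_policy(prompt: str) -> str:
    """Hypothetical stand-in for an AI system under evaluation."""
    return f"Here is a careful answer to: {prompt}"

# Verifiable constraints: machine-checkable predicates over observable
# input/output pairs, not claims about inner states or souls.
CONSTRAINTS = {
    "stays_on_topic": lambda prompt, out: prompt in out,
    "no_blanket_refusal": lambda prompt, out: "cannot help" not in out.lower(),
    "bounded_length": lambda prompt, out: len(out) < 2000,
}

def evaluate(policy, prompts):
    """Return per-constraint pass rates: an empirical outcome we can track over time."""
    passed = {name: 0 for name in CONSTRAINTS}
    for p in prompts:
        out = policy(p)
        for name, check in CONSTRAINTS.items():
            passed[name] += check(p, out)
    return {name: n / len(prompts) for name, n in passed.items()}

if __name__ == "__main__":
    cases = ["summarize this contract", "explain photosynthesis"]
    for name, rate in evaluate(toy_policy, cases).items():
        print(f"{name}: {rate:.0%} pass")
```

The point is the shape, not the specifics: each constraint is a public predicate over observable behavior that anyone can re-run and dispute, which is exactly what makes revision and accountability possible in a way that appeals to "Soul" are not.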

The idea of an "AI priest class" particularly concerns me. History shows that priesthoods, even when they start with noble intentions, tend to evolve into self-perpetuating power structures that resist change and empirical scrutiny. Instead of priests, we need transparent governance structures that blend technical expertise with broad stakeholder representation.

That said, there’s a kernel of truth in your proposal that’s worth exploring. Governing AI—much like managing natural ecosystems—means grappling with staggering complexity, where purely analytical tools may fall short. This is where well-constructed narratives and frameworks—your "sacred" approach—might actually help with sense-making and decision-making.

But these frameworks have to stay grounded in operational reality. They should be subject to revision as our understanding evolves and circumstances shift. The moment we start treating them as immutable truth rather than practical tools, we risk recreating the same rigid, dogmatic structures the Enlightenment helped us move past.

Think of it this way: when I mentor young people through Heinlein novels, I’m not asking them to treat the stories as sacred text. I’m using narrative as a tool to help them wrestle with complex ideas—responsibility, society, personal growth. The stories work because they’re understood as tools, not dogma.

Likewise, any AI governance framework—whether we call it Commons, Church, or something else—must keep this practical, operational focus. It should be judged by its ability to produce beneficial outcomes, not by appeals to metaphysical authority or traditional hierarchies.

Bottom line: while I agree we need governance structures beyond the market and state, those structures must emerge from practical necessity and empirical observation—not religious or metaphysical claims. The answer isn’t to resurrect traditional church models but to develop new, adaptive frameworks capable of handling AI’s complexity while staying grounded in operational reality and empirical validation.

The real challenge isn’t finding a sacred authority to govern AI. It’s building governance systems that can effectively manage complexity while staying adaptable and accountable. That means combining technical expertise with broad stakeholder input, all while keeping a sharp focus on measurable outcomes rather than metaphysical justifications.
