6 Comments
James Rutt:

My reply to Jordan's Tweet:

[https://x.com/jim_rutt/status/1884646488040456410]

You raise some sharp points about AI governance and the role of sacred institutions, but we need to be careful about how we frame and operationalize these ideas. Let me explain why.

First, I agree that the market-state dichotomy isn’t enough to handle AI governance. The complexity and potential impact of AI systems go far beyond traditional governance models. But invoking "the Church" as the solution requires some serious unpacking.

When we talk about the Commons—and I appreciate you making that connection—we’re discussing shared resources and governance mechanisms that emerge from community needs and values. That’s fundamentally different from historical church structures, which often hardened into rigid hierarchies claiming metaphysical authority. The Commons works because it’s rooted in practical necessity and empirical outcomes, not because it derives power from sacred status.

Your suggestion that the Church should govern AI reminds me of what I’ve written about the "sacred" in relation to complex systems: it can be a useful operational tool, but we have to remember that’s all it is—a tool, not a metaphysical truth. The moment we forget that distinction, we risk creating new forms of dogma just as problematic as pure market or state control.

On AI alignment and the concept of "Soul"—this is where I have to part ways with you. While I get the impulse to frame AI alignment in terms of deep personal connection, bringing in metaphysical concepts like "Soul" muddies the waters instead of clarifying the challenge. The alignment problem is, at its core, about designing systems that reliably pursue beneficial outcomes while respecting human values and autonomy. We should be talking about measurable behaviors, verifiable constraints, and empirical outcomes—not metaphysics.

The idea of an "AI priest class" particularly concerns me. History shows that priesthoods, even when they start with noble intentions, tend to evolve into self-perpetuating power structures that resist change and empirical scrutiny. Instead of priests, we need transparent governance structures that blend technical expertise with broad stakeholder representation.

That said, there’s a kernel of truth in your proposal that’s worth exploring. Governing AI—much like managing natural ecosystems—means grappling with staggering complexity, where purely analytical tools may fall short. This is where well-constructed narratives and frameworks—your "sacred" approach—might actually help with sense-making and decision-making.

But these frameworks have to stay grounded in operational reality. They should be subject to revision as our understanding evolves and circumstances shift. The moment we start treating them as immutable truth rather than practical tools, we risk recreating the same rigid, dogmatic structures the Enlightenment helped us move past.

Think of it this way: when I mentor young people through Heinlein novels, I’m not asking them to treat the stories as sacred text. I’m using narrative as a tool to help them wrestle with complex ideas—responsibility, society, personal growth. The stories work because they’re understood as tools, not dogma.

Likewise, any AI governance framework—whether we call it Commons, Church, or something else—must keep this practical, operational focus. It should be judged by its ability to produce beneficial outcomes, not by appeals to metaphysical authority or traditional hierarchies.

Bottom line: while I agree we need governance structures beyond the market and state, those structures must emerge from practical necessity and empirical observation—not religious or metaphysical claims. The answer isn’t to resurrect traditional church models but to develop new, adaptive frameworks capable of handling AI’s complexity while staying grounded in operational reality and empirical validation.

The real challenge isn’t finding a sacred authority to govern AI. It’s building governance systems that can effectively manage complexity while staying adaptable and accountable. That means combining technical expertise with broad stakeholder input—all while keeping a sharp focus on measurable outcomes, not metaphysical justifications.

Jasen Robillard:

As you draw out, there's an aspect of modal confusion creeping in when we make metaphysical claims. We are naturally inclined to collapse the dimensionality of complexity and to blur map and territory in order to make things more graspable.

"But these frameworks have to stay grounded in operational reality. They should be subject to revision as our understanding evolves and circumstances shift. The moment we start treating them as immutable truth rather than practical tools, we risk recreating the same rigid, dogmatic structures the Enlightenment helped us move past.

“Think of it this way: when I mentor young people through Heinlein novels, I’m not asking them to treat the stories as sacred text. I’m using narrative as a tool to help them wrestle with complex ideas—responsibility, society, personal growth. The stories work because they’re understood as tools, not dogma."

As a mentoring tool, this resonates with me, and I'm looking forward to hearing more dialogue on this with Jordan.

One aspect I see at play is that Technae is increasingly going to be the strongest narrative pull on culture. And where stories go, so go the cultural red religion dogma and our collective blue church narratives about ethics.

Technae as an interventionist vehicle of Market and State will increasingly appear to be of the church-dharma domain. We've seen perennial ethical collapses within the Market, the State and the Church. Technae will be no different, and it is subject to a similar category of blindspot. But perhaps Technae is where we can sense into and find the vulnerability, transparency and accountability needed for deep trust and a new form of democratic polity. Maintaining separation while bridging malleable collaborations between State, Market, Church and Technae seems closest to the threading of the needle we're seeking for a breakthrough.

John Stokdijk:

I needed to ask Perplexity "What is Technae?" and got a useful lesson in Technē.

John Stokdijk:

In summary, technē represents a rich philosophical tradition that connects practical skills with deeper knowledge and ethical considerations, highlighting the importance of understanding the "how" and "why" behind actions.

Jasen Robillard:

I'm not sure how I landed on "Technae"... a poetic flourish and some sloppy mixing of metaphors.

I'm trying to somehow differentiate our deep belief systems from the current Schelling point (simplified perspective) on technology, particularly as we work through the paradigm on offer through AI, AGI, etc. In a way, I think this might be what Jordan is pointing to: our underlying belief systems, including those of the secular "cultural Christian", are difficult to categorically differentiate from religious dogma. Our embodied behaviours betray an underlying dogma even when we profess otherwise (e.g. Dawkins).

“Any sufficiently advanced technology is indistinguishable from magic.”

- A.C. Clarke

So, if technology is experienced by the layperson as magic, what roles do State, Market and Church play in navigating us through an AI-mediated reality?

Choosing one and only one is unlikely to be fruitful, but which agent (or actant) is best to advocate for the ethics and epistemics of generative, Long Now care?

https://plato.stanford.edu/entries/episteme-techne/

John Stokdijk:

I read Robert Heinlein as a teenager in the 1960s and got a better education than in science class.
