
Automationism: Collective Ownership of AI Production

  • Writer: landonrshumway
  • Feb 10
  • 7 min read


Part 3: Civitas


[Image: Hands of diverse skin tones form a circle with glowing blue and orange lights in a dark setting, symbolizing unity and connection.]


What if scrolling through your feed made society better?


Unfortunately, we have grown accustomed to the opposite. Social media, as it exists today, is an extraction engine. It mines the ore of human attention and refines it into advertising revenue. Whatever connection or community emerges is incidental, a byproduct of the true purpose: engagement metrics that can be sold.


But the underlying technology is not inherently extractive. The feedback loops, the network effects, the ability to coordinate millions of people in real time—these are tools. They could be wielded differently. What if we designed civic systems with the same sophistication we currently reserve for selling products? What if the algorithms that now predict what you'll click were instead trained to predict what you'd support, what you'd contribute to, what governance decisions actually need your input?


This is the vision of civic media: technology that channels the connective power of digital networks toward coordination rather than extraction, toward collective decision-making rather than collective manipulation.


In Part 2, we explored how automationist cells might dissolve traditional hierarchies, with AI serving as impartial advisor to democratic collectives. But we left a crucial question unanswered: what does this democracy actually look like? How do everyday citizens, busy with lives, families, interests beyond governance, participate meaningfully in directing their communities?


This concept of civic media is explored in We Can Be Perfect: The Paradox of Progress, a novel co-authored with AI, where automationist societies develop a platform called Civitas — a system that allows members of automationist cells to shape the course of their communities through digital collaboration.


But before we imagine what's possible, we must first understand what exists... and why it's failing.



The Exhaustion of Analog Democracy


Consider the machinery of contemporary democracy. It is, at its core, an 18th-century technology struggling to govern a 21st-century world.


Citizens vote for representatives. Those representatives gather in chambers designed for hundreds of people to speak sequentially. They draft legislation through labyrinthine committee processes. Bills wind through readings, amendments, procedural votes. Months pass. Years. The gap between a problem emerging and a democratic response landing can span decades (if a response comes at all).


These are not failures of democracy's ideals. They are failures of its implementation, artifacts of analog constraints that no longer apply. These systems were built in an era of slow communication, limited literacy, and genuine logistical barriers to gathering public input.


Today, we carry supercomputers in our pockets. Information traverses the globe in milliseconds. More people have access to education and information now than ever before. Every citizen could, in principle, participate in every decision. The infrastructure for direct democracy exists.


So what could democracy for the 21st century look like?


The Iterative Democracy


Imagine a different relationship between citizen and governance.


You wake up. Your civic interface (perhaps an app, perhaps a conversation with your personal AI assistant) presents you with a notification. A proposal has emerged within your cell that aligns with topics you've indicated interest in: local food systems, sustainable agriculture, community resilience. The proposal suggests partnering with a neighboring cell to establish a shared vertical farming facility.


The proposal doesn't go to a vote immediately. First, it enters a period of recursive refinement. Community members suggest modifications. Alternative proposals emerge, perhaps from those who share the goal but prefer different means. These alternatives don't compete in winner-take-all fashion; the system looks for synthesis, for hybrid approaches that incorporate the best elements of multiple visions.


If the proposal reaches sufficient refinement and approval, it moves to ratification. The proposed changes become part of the protocol implemented by the AI systems. The community then measures the impact of the change and adjusts according to what works and what doesn't.


This is iterative democracy: a continuous, living process of proposal, refinement, and ratification that treats citizens not as occasional voters but as ongoing participants in an evolving collective intelligence.
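The lifecycle sketched above — proposal, refinement, ratification, measurement, adjustment — can be made concrete as a small state machine. Everything here (the state names, the approval threshold, the `Proposal` class itself) is a hypothetical illustration, not a specification of how Civitas would actually work:

```python
# A toy state machine for the iterative-democracy lifecycle:
# draft -> refinement -> ratified -> adopted (or back to refinement).
# All states and thresholds are illustrative, not a real protocol.

APPROVAL_THRESHOLD = 0.66  # hypothetical supermajority for ratification

class Proposal:
    def __init__(self, title):
        self.title = title
        self.state = "draft"
        self.revisions = []   # refinement history
        self.support = 0.0    # fraction of participants in favor

    def refine(self, revision):
        """Community members suggest modifications; alternatives merge in."""
        assert self.state in ("draft", "refinement")
        self.state = "refinement"
        self.revisions.append(revision)

    def poll(self, support):
        """Record current community support (0.0 to 1.0)."""
        self.support = support
        if self.state == "refinement" and support >= APPROVAL_THRESHOLD:
            self.state = "ratified"  # becomes part of the cell's protocol

    def measure(self, worked):
        """After ratification, the community measures impact and adjusts."""
        assert self.state == "ratified"
        self.state = "adopted" if worked else "refinement"

p = Proposal("Shared vertical farm with neighboring cell")
p.refine("Start with a single pilot tower")
p.poll(0.72)            # crosses the hypothetical threshold
p.measure(worked=True)  # the change held up in practice
```

Note that failure doesn't kill the proposal; it loops back into refinement, which is what makes the process iterative rather than winner-take-all.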



The Architecture of Civic Media


So how does this actually work?


It starts with a simple principle: anyone can propose anything. A new initiative, a modification to existing arrangements, a question that needs collective resolution. But raw proposals don't immediately flood everyone's attention. AI systems perform an initial analysis. Is this coherent? Does it overlap with existing initiatives? Who in the community has relevant expertise or stated interest? The proposal gets routed to those most likely to engage meaningfully, expanding outward only as it gains traction.
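The routing step in that paragraph can be sketched in a few lines: score each member by how much their declared interests overlap the proposal's topics, and show the proposal first to the best matches. The member names, topics, and scoring rule are all invented for illustration; a real system would be far more sophisticated:

```python
# Illustrative sketch of interest-based routing: a proposal reaches members
# whose declared interests overlap its topics, expanding outward only as it
# gains traction. Names and the overlap heuristic are hypothetical.

def route(proposal_topics, members, min_overlap=1):
    """Return members ranked by topic overlap, best matches first."""
    scored = []
    for name, interests in members.items():
        overlap = len(set(proposal_topics) & set(interests))
        if overlap >= min_overlap:
            scored.append((overlap, name))
    return [name for overlap, name in sorted(scored, reverse=True)]

members = {
    "amara": {"food systems", "agriculture"},
    "ben":   {"transit"},
    "chen":  {"agriculture", "resilience", "energy"},
}
first_wave = route({"food systems", "agriculture", "resilience"}, members)
# amara and chen see the proposal first; ben only if it gains traction
```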


As community members weigh in, the system doesn't just collect opinions. It maps the structure of disagreement. What values are in tension? What factual claims underlie each position? Where do people agree on ends but disagree on means? This mapping makes productive discourse possible: instead of repetitive arguments that talk past each other, the community can identify cruxes, the specific points where resolution would cascade into broader agreement. And when the community seems stuck, when two factions appear irreconcilable, the system can help generate alternatives, drawing on the full corpus of human governance experiments to ask: is there a third option that honors the core concerns of both? Not as a command, but as a seed for the community's own creativity.
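The idea of a "crux" can be illustrated with a deliberately tiny model: represent each position as a set of claims it accepts or rejects, and surface the claims where the sides actually diverge. The claims and positions below are invented examples:

```python
# Illustrative crux-finding: each position maps claims to stances, and a
# crux is any claim the two sides score differently. A real disagreement
# map would also separate value tensions from factual disputes.

def find_cruxes(position_a, position_b):
    """Return claims the two positions disagree on - candidate cruxes."""
    cruxes = []
    for claim in set(position_a) | set(position_b):
        if position_a.get(claim) != position_b.get(claim):
            cruxes.append(claim)
    return sorted(cruxes)

pro_farm = {"vertical farms cut water use": True, "cost is affordable": True}
skeptic  = {"vertical farms cut water use": True, "cost is affordable": False}
# Both sides accept the water claim; the disagreement hinges on cost,
# so resolving that one claim would cascade into broader agreement.
```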


Of course, no citizen can engage deeply with every issue. Civic media accommodates this through fluid delegation. You might delegate agricultural decisions to a neighbor who farms, technical infrastructure decisions to an engineer you trust, cultural programming to an artist whose taste you share. These delegations are configurable and can be changed at any moment. Delegates cannot bind you without your consent, and you can override them whenever you choose to engage directly.
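This kind of fluid delegation is often called liquid democracy, and its core rule — a direct vote always overrides the delegate — fits in a short sketch. The class, topics, and names here are hypothetical illustrations:

```python
# Minimal sketch of fluid (liquid) delegation: votes flow to a chosen
# delegate per topic, but a direct vote always overrides, and delegations
# are revocable at any moment. Illustrative only.

class Ballot:
    def __init__(self):
        self.delegates = {}      # topic -> delegate name
        self.direct_votes = {}   # (topic, issue) -> choice

    def delegate(self, topic, to):
        self.delegates[topic] = to           # revocable at any moment

    def undelegate(self, topic):
        self.delegates.pop(topic, None)

    def vote(self, topic, issue, choice):
        self.direct_votes[(topic, issue)] = choice  # overrides delegate

    def resolve(self, topic, issue, delegate_choices):
        """Direct vote wins; otherwise the delegate's choice; else abstain."""
        if (topic, issue) in self.direct_votes:
            return self.direct_votes[(topic, issue)]
        delegate = self.delegates.get(topic)
        return delegate_choices.get(delegate, "abstain")

b = Ballot()
b.delegate("agriculture", "amara")         # amara speaks for us...
b.vote("agriculture", "farm-pilot", "no")  # ...until we engage directly
```

The key design choice is that delegation is per-topic and per-moment, not a blanket transfer of power, which is exactly what distinguishes it from electing a representative.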


Finally, every element of this system is auditable. The algorithms that route proposals, the models that project impacts, the mappings that structure arguments are all open to inspection. Any citizen can ask: why did I see this proposal? Why was this impact weighted this way? The system's workings are public infrastructure, maintained by the community, subject to the same democratic refinement as any other policy.



The Safeguards


Power wants to concentrate. This is something like a law of social physics. Any system of governance, however egalitarian its intentions, faces constant pressure from those who would capture it for private benefit. Civic media is not immune. So what prevents iterative democracy from collapsing into new forms of hierarchy?


The first line of defense is architectural. The civic media platform cannot be owned or controlled by any single entity. Like the internet's original design, it must be a protocol rather than a product, a shared grammar that any community can implement and no single party can monopolize. The moment civic media becomes proprietary, it becomes capturable. And the AI systems that facilitate democracy must themselves be democratically governed. Their training objectives, their optimization targets, their operational parameters must all be visible, challengeable, and modifiable by the community they serve. An opaque algorithm is an unelected ruler.


The second line of defense is behavioral. Unlike commercial platforms that profit from maximizing engagement, civic media must minimize unnecessary cognitive burden. The system should actively protect citizens from decision fatigue, information overload, and manufactured urgency. If an issue doesn't require your input, you shouldn't see it. If a controversy is artificial, generated to capture attention rather than resolve genuine disagreement, the system should dampen it rather than amplify it.


The third is structural. Some decisions affect only a single cell. Others ripple across the network. Civic media must maintain clear boundaries between local and network-wide governance, ensuring that higher-level coordination cannot override cellular autonomy except where genuinely necessary for collective action. The principle of subsidiarity, deciding issues at the lowest level capable of addressing them, must be encoded into the system itself.
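Encoding subsidiarity could be as simple as a routing rule: a decision is handled at the lowest level whose scope covers everyone it affects. The level names and the district-size cutoff below are hypothetical:

```python
# Sketch of subsidiarity as a routing rule: route each decision to the
# lowest governance level that spans all affected cells, so network-wide
# bodies never touch purely local matters. Levels and cutoffs are invented.

def decision_level(affected_cells):
    """Return the lowest level capable of addressing the decision."""
    if len(affected_cells) == 1:
        return "cell"          # purely local: cellular autonomy applies
    if len(affected_cells) <= 5:  # hypothetical district size
        return "district"
    return "network"           # genuine collective-action problem

assert decision_level({"oak"}) == "cell"
assert decision_level({"oak", "elm"}) == "district"
```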


And beneath all of these sits the most fundamental safeguard of all: the right to leave. No cell can compel membership. The freedom to join another cell, to form a new one, to exist outside the network entirely must remain inviolable. This exit right provides the ultimate check on cellular governance. If a cell's democracy becomes dysfunctional, members vote with their feet, and competition between cells for members creates evolutionary pressure toward good governance.


Finally, humans remain sovereign over AI. The advisory systems can model, project, synthesize, and suggest. They cannot decide. The moment that democracy is fully automated, it is no longer democracy.


A Vision Forward


We stand at a peculiar moment in history. The technologies that could enable unprecedented human coordination are instead being used to fragment us. This is not inevitable. It is a choice (or rather, a series of choices made by those who currently control these technologies, choices shaped by an economic system that values extraction over coordination).


Automationism offers an alternative. Civitas, the Latin word for the community of citizens, becomes something we actively construct together, moment by moment, choice by choice.


This vision raises as many questions as it answers. How do we transition from current systems to this architecture? What happens to existing power structures as this transition unfolds? How do cells interact with legacy institutions, the nations, corporations, and international bodies that will not simply dissolve because we have imagined something better?


These are questions for Part 4, where we will explore the roadmap from here to there: the practical steps, the pilot experiments, the coalitions that might carry automationism from theoretical framework to lived reality.


For now, let's sit with this possibility: a world where you speak, and your voice brings change, where technology serves deliberation rather than distraction, where the collective intelligence of humanity is finally given an architecture worthy of its potential.


We have built systems that bring out our worst: our tribalism, our short-sightedness, our susceptibility to manipulation. There is no law of nature that says we must continue. We can just as deliberately build systems that bring out our best.


The choice is ours. The tools are ready.


What are we waiting for?


---


This post is part of a series exploring automationism, an economic framework first explored in the novel 'We Can Be Perfect: The Paradox of Progress.' Join the conversation and share your thoughts on how civic technology might reshape democratic participation.


E Pluribus Unum — Out of many, one
