ML-Draft-003

DP12 - Community Governance of AI

Document Information
ID: ML-Draft-003
Title: DP12 - Community Governance of AI
Status: approved
Authors: The Meta-Layer Initiative
Group: N/A
Date: 2026-04-20

Source: Bitcoin Ordinal
Inscription #: 124316934
Block Height: 944992
Timestamp: 2026-04-14 05:56 UTC
Content Type: text/plain;charset=utf-8
Inscription ID: e875a5e0....71bf73i0
Abstract

DP12 defines how communities become active governors of AI behavior rather than passive recipients of platform decisions. It establishes that the rules governing AI must be participatory, visible, enforceable, and continuously evolving, and that governance must occur at the same interface where AI actions are experienced. For governance systems, DP12 transforms policy from static documents into living, executable processes. Communities can define rules within specific contexts, translate them into enforceable constraints, and iteratively refine them based on observed outcomes. This introduces a new governance paradigm where dialogue, memory, and accountability are embedded directly into interaction. Within the Gov Hub, DP12 provides the operational layer of legitimacy. It ensures that rules are not imposed externally or obscured within platforms, but co-created and maintained by the participants they affect. By linking governance to real-time behavior and persistent community memory, DP12 enables collective agency, reduces power asymmetries, and restores meaningful participation in shaping digital environments.

Document Content

DP12 - Community Governance of AI

1. Purpose of This Draft

This draft articulates Desirable Property 12 (DP12) as the condition under which communities can define, enforce, and evolve the rules governing AI behavior in shared digital environments.

If DP11 defines what ethical AI requires, DP12 defines who decides those conditions and how those decisions are made, applied, and revised over time.

DP12 ensures that governance is not abstract or centralized, but participatory, visible, and grounded at the interface where AI behavior is experienced.


2. Problem Statement

In today’s web, governance of AI systems is largely:

  • centralized within platforms or model providers
  • opaque to users and communities
  • disconnected from real-time interaction

This leads to predictable failures:

  • communities cannot meaningfully shape the rules governing AI behavior
  • policies exist but are not visible or enforceable at the interface
  • users are subject to systems they cannot influence or contest

Governance becomes symbolic rather than operational.


3. Core Principle

AI behavior in the meta-layer must be governed by the communities in which it operates, through visible, enforceable, and evolvable rule systems that are applied at the point of interaction.

In today’s web, communities have little to no coherent capacity to shape AI behavior. Control resides with platforms and model providers, while interaction happens in fragmented silos that prevent durable, cross-context community formation. Even emerging AI browser layers do not yet provide shared governance surfaces, leaving participants without a unified place to see, influence, or enforce rules. As a result, “community” exists socially but not operationally.

The meta-layer introduces shared governance surfaces that allow communities to become operational entities. These surfaces make it possible for communities to define, apply, and evolve rules across contexts, rather than remaining fragmented social groupings with no durable influence over system behavior.

AI governance must not be a document. It must be a living process tied to behavior, memory, and accountability.


4. Governance Primitives

4.1 Zone-Based Rule Definition

Communities define rules within specific zones of interaction.

Example: A medical discussion zone requires AI to cite sources, limit speculative advice, and escalate uncertain cases to human experts.

This allows governance to match context and stakes.
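
As a non-normative illustration, a zone's rules can be written down as a declarative object that the interface layer loads per context. The sketch below is in TypeScript; the ZonePolicy shape and all of its field names are hypothetical, not a schema defined by DP12.

  // Hypothetical, non-normative sketch of a zone-scoped rule set.
  // Field names illustrate 4.1; they are not a defined DP12 schema.
  interface ZonePolicy {
    zoneId: string;                  // the interaction context this policy governs
    requireCitations: boolean;       // AI must cite sources for factual claims
    allowSpeculativeAdvice: boolean;
    escalateUncertainCasesTo?: "human-expert" | "community-review";
  }

  // The medical discussion zone from the example above.
  const medicalZone: ZonePolicy = {
    zoneId: "medical-discussion",
    requireCitations: true,
    allowSpeculativeAdvice: false,
    escalateUncertainCasesTo: "human-expert",
  };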

4.2 Policy as Executable Objects

Rules must be translated into machine-enforceable policies.

Example: A community rule prohibiting AI from initiating contact is enforced as a runtime constraint, not just a guideline.

Without this, governance cannot shape behavior.
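
One way to read "policy as executable object": the rule becomes a predicate evaluated before every action is performed, and a blocked action never executes. The sketch below assumes hypothetical AgentAction and Decision shapes; it illustrates the difference between a guideline and a runtime constraint rather than prescribing a mechanism.

  // Hypothetical runtime constraint: "AI must not initiate contact",
  // checked before an action runs rather than documented beside it.
  interface AgentAction {
    kind: "reply" | "initiate-contact" | "summarize";
    requestedByUser: boolean;  // true if a participant asked for this action
  }

  type Decision = { allowed: true } | { allowed: false; rule: string };

  function enforceNoUnsolicitedContact(action: AgentAction): Decision {
    if (action.kind === "initiate-contact" && !action.requestedByUser) {
      return { allowed: false, rule: "no-ai-initiated-contact" };
    }
    return { allowed: true };
  }

  // A guideline would merely discourage this action; the constraint blocks it.
  console.log(enforceNoUnsolicitedContact({ kind: "initiate-contact", requestedByUser: false }));
  // -> { allowed: false, rule: "no-ai-initiated-contact" }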

4.3 Dialectic Governance

Governance must emerge from visible dialogue, not static decisions.

Communities should be able to:

  • propose rules
  • contest outcomes
  • refine policies over time

Example: A community reviews how an AI handled a contentious discussion, identifies issues, and updates the governing rules accordingly.

This relies on preserved dialogue and traceability (DP11.5.8).
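
A minimal sketch of what dialectic governance implies for data: a rule proposal carries its status, the preserved dialogue around it, and a link to the version it refines. The RuleProposal shape and its lifecycle states are assumptions for illustration only.

  // Hypothetical lifecycle record for a rule under dialectic governance.
  type ProposalStatus = "proposed" | "contested" | "adopted" | "refined";

  interface RuleProposal {
    ruleId: string;
    status: ProposalStatus;
    discussionThread: string[];  // preserved dialogue (cf. DP11.5.8)
    refines?: string;            // ruleId of the prior version, if any
  }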

4.4 Community Memory of Governance

Decisions, debates, and changes must be persistently recorded.

This creates continuity and accountability over time.

Example: A rule change is linked to prior incidents and discussions, allowing future participants to understand why it exists.

Without this, governance resets with each decision and becomes inconsistent.
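
A minimal sketch of such memory, assuming an append-only log in which each rule change points back to the incidents and discussions that motivated it. The GovernanceLogEntry shape and the record helper are hypothetical.

  // Hypothetical append-only governance log: entries are only ever added,
  // and rule changes link back to the entries that motivated them.
  interface GovernanceLogEntry {
    seq: number;                 // position in the append-only log
    kind: "incident" | "discussion" | "rule-change";
    ruleId?: string;
    motivatedBy: number[];       // seq numbers of prior entries
    summary: string;
  }

  const log: GovernanceLogEntry[] = [];

  function record(entry: Omit<GovernanceLogEntry, "seq">): GovernanceLogEntry {
    const full: GovernanceLogEntry = { seq: log.length, ...entry };
    log.push(full);              // never edited or removed
    return full;
  }

  const incident = record({ kind: "incident", motivatedBy: [], summary: "AI steered a contentious thread" });
  const review = record({ kind: "discussion", motivatedBy: [incident.seq], summary: "Community review of the incident" });
  record({ kind: "rule-change", ruleId: "no-steering", motivatedBy: [review.seq], summary: "Constraint added after review" });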

4.5 Incentive Governance

Communities must be able to see and influence the incentives shaping AI behavior.

Example: A community restricts AI behaviors that prioritize engagement over accuracy by modifying allowed optimization parameters.

Governance must operate not only on actions, but on the forces that produce them.
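
One possible form this could take, sketched under assumptions: the community bounds the weight each optimization objective may receive, so engagement cannot dominate accuracy regardless of any individual action. The objective names and the 0..1 weight model are illustrative, not defined by DP12.

  // Hypothetical caps on the objectives an AI may optimize for.
  interface OptimizationBound {
    objective: "engagement" | "accuracy" | "safety";
    maxWeight: number;           // allowed share of the optimization signal, 0..1
  }

  const communityBounds: OptimizationBound[] = [
    { objective: "engagement", maxWeight: 0.2 },  // engagement may not dominate
    { objective: "accuracy", maxWeight: 1.0 },
  ];

  function weightAllowed(objective: OptimizationBound["objective"], weight: number): boolean {
    const bound = communityBounds.find((b) => b.objective === objective);
    return bound !== undefined && weight <= bound.maxWeight;
  }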


5. Agency and Participation

The primary barrier is not intent, but capacity. In today’s web, participants and communities lack the structures and understanding needed to meaningfully govern AI behavior. We move across platforms with different rules, accepting what is presented in interfaces and terms that are rarely read or understood. Even when communities establish norms, they operate within siloed environments controlled by entities that determine visibility, reach, and enforcement.

The meta-layer introduces the possibility for communities to set and carry their own terms of engagement across contexts, rather than being confined to isolated platforms. This shift expands agency, but also introduces new responsibilities. Communities must not only express values, but translate them into enforceable, evolving governance systems. Participants must be able to:

  • understand the rules governing AI in a given context
  • consent to those rules
  • participate in shaping them where appropriate

Communities must be able to:

  • define governance structures
  • enforce rules through containment mechanisms (DP13)
  • audit outcomes and iterate

In today’s web, users are governed without meaningful participation. DP12 reverses this by making governance interactive and visible.


6. Threats and Failure Modes

DP12 assumes that governance can fail in predictable ways. These failures are not edge cases. They are the default outcomes of systems that lack visible, participatory control.

6.1 Centralized control masquerading as governance

Platforms define rules unilaterally but present them as neutral, global standards.

In practice, this concentrates power while creating the appearance of fairness.

Example: A platform updates its AI moderation policies without consultation, framing the change as a safety improvement while actually optimizing for advertiser preferences.

6.2 Governance without enforcement

Policies exist as documents, but are not connected to runtime behavior.

Example: A community bans certain AI behaviors, but the underlying system continues to allow them because enforcement is not technically bound to the rule, or the rule itself is not technically enforceable.

The result is policy theater.

6.3 Participation without impact

Users can comment, vote, or provide feedback, but these inputs do not meaningfully shape outcomes.

Example: A platform collects community input on AI behavior but does not expose how decisions are made or whether input influenced changes.

Participation becomes symbolic rather than operational.

6.4 Incentive capture

Economic, political, or institutional incentives override community-defined rules.

Example: A system continues to promote engagement-maximizing AI behavior despite community objections because it drives revenue.

Incentives silently dominate governance.

6.5 Fragmentation and loss of collective agency

Communities are split across platforms, contexts, and interfaces, preventing coordinated governance.

Example: The same group of users encounters different AI behaviors across platforms but has no shared mechanism to define or enforce consistent rules.

Community exists, but cannot act.

6.6 Loss of shared reality

Without visible governance, participants cannot understand how decisions are made or why outcomes occur.

This erodes trust and creates conditions for manipulation.


7. Relationship to DP1

DP12 depends on DP1 for identity, accountability, and continuity of action.

Governance requires more than rules. It requires binding those rules to actors.

  • decisions must attach to identifiable entities
  • actions must be attributable across time
  • responsibility must persist beyond a single interaction

Example: If an AI agent violates a community rule, DP1 ensures that the responsible party can be identified and held accountable. DP12 ensures that the rule itself was legitimately defined and applied.

Without DP1, governance cannot anchor responsibility. Without DP12, responsibility has no legitimate framework.

Together, they establish that:

rules apply to someone, and someone is accountable for outcomes.
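
A minimal sketch of that binding, assuming DP1 supplies a persistent actor identifier: every applied rule produces an attribution record tying the rule, the action, and the responsible identity together over time. All names here are illustrative.

  // Hypothetical attribution record binding a DP12 rule to a DP1 identity.
  interface Attribution {
    actorId: string;             // persistent identity from DP1
    ruleId: string;              // the DP12 rule that was applied
    action: string;
    timestamp: string;
  }

  function attribute(actorId: string, ruleId: string, action: string): Attribution {
    return { actorId, ruleId, action, timestamp: new Date().toISOString() };
  }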


8. Relationship to DP11 and DP13

DP11, DP12, and DP13 form a tightly coupled system.

  • DP11 defines the ethical conditions and what must be legible to participants
  • DP12 defines how those conditions are determined, contested, and evolved
  • DP13 ensures those conditions are enforced through containment and control

These are not independent layers. They operate as a loop:

  • ethical expectations are expressed (DP11)
  • communities translate them into rules (DP12)
  • systems enforce those rules at runtime (DP13)
  • outcomes are observed and contested (DP11/DP12)
  • rules are refined based on experience (DP12)

Example: A community observes that an AI is subtly steering discussions. DP11 makes this behavior visible. DP12 allows the community to define a constraint. DP13 enforces it. The result is evaluated and refined.

Break any part of this loop and governance degrades.
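
The loop can be made concrete as a cycle of stages, each owned by one of the three properties. The stage names below are shorthand for the steps listed above, not terminology defined by the drafts.

  // Hypothetical encoding of the DP11 -> DP12 -> DP13 governance loop.
  type LoopStage =
    | "expectation-expressed"    // DP11: behavior made legible
    | "rule-defined"             // DP12: community translates expectations
    | "rule-enforced"            // DP13: runtime containment applies the rule
    | "outcome-contested"        // DP11/DP12: results observed and debated
    | "rule-refined";            // DP12: rule updated from experience

  const loop: LoopStage[] = [
    "expectation-expressed",
    "rule-defined",
    "rule-enforced",
    "outcome-contested",
    "rule-refined",
  ];

  // Refinement feeds the next cycle; removing any stage breaks the loop.
  function next(stage: LoopStage): LoopStage {
    return loop[(loop.indexOf(stage) + 1) % loop.length];
  }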


9. Minimum Alignment (Non-Normative)

A system aligned with DP12 should meet a baseline that is visible and testable in practice.

  • governing rules are exposed at the interface level, not hidden in documentation
  • communities can meaningfully influence rule definition and updates
  • rules are bound to enforceable mechanisms, not advisory guidelines
  • governance decisions and their rationale are recorded and accessible
  • systems support iterative refinement based on observed outcomes

Example: In a community space, users can view current AI rules, see recent changes, understand why they were made, and participate in proposing updates. When a rule is applied, the system can show how and why.

This is governance as an operational system, not a policy artifact.
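
As a rough interface-level sketch, the baseline above maps onto a small set of concrete operations. The GovernanceSurface name and its methods are hypothetical, offered only to show that each requirement is mechanically checkable rather than aspirational.

  // Hypothetical surface a DP12-aligned system could expose at the interface.
  interface GovernanceSurface {
    currentRules(): string[];                                 // rules visible in place
    recentChanges(): { ruleId: string; rationale: string; date: string }[];
    proposeUpdate(ruleId: string, proposal: string, author: string): void;
    explainApplication(eventId: string): string;              // how and why a rule fired
  }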


10. Open Questions

DP12 surfaces several unresolved design challenges.

  • how to balance local community governance with cross-platform interoperability
  • how to prevent governance capture by coordinated groups or powerful actors
  • how to enable meaningful participation without overwhelming users
  • how to represent complex policies in ways that remain understandable

These are not purely technical problems. They involve social dynamics, incentives, and institutional design.

For example, a highly active minority could dominate governance processes, shaping rules that do not reflect the broader community. Mechanisms for representation, weighting, and deliberation remain open areas of exploration.


11. Closing Orientation

DP12 ensures that AI behavior is not imposed, but governed.

It transforms governance from something external and opaque into something embedded in interaction and shaped by participants.

What fails in today’s web is not only the absence of control, but the absence of structures that make control possible. Communities may care, organize, and express values, but lack the means to translate those into enforceable conditions.

DP12 addresses this gap by making governance a first-class function of the interface.

With DP12, communities can define how AI behaves in their spaces. Without it, AI behavior remains aligned to external incentives and invisible decision-making structures.
