Governance of AI, with AI, through deliberation

Much of my work in the past three years has shifted from a focus on broader information ecosystem impacts (i.e., technologies that exacerbate or reduce misinformation, polarization, and the like) to a deeply related question: How can we govern the impacts of technology, at the necessary speed and scale, in a way that is legitimate, high-quality, democratic, and (in some cases) global? (In other words, how can we align technology and its impact with what people want—or what they would want if they had the time to think about it?)

This has involved research exploring and observing a vast array of governance approaches, in order to identify and/or synthesize mechanisms that:

  1. Can be practical and legitimate at different scales—including globally where absolutely necessary.
  2. Can provide high-quality informed decisions at sufficient speeds.
  3. Can be connected to influence and power.

That research has led me to focus primarily on two approaches to governance—citizens' assemblies and collective response processes—both of which have a chance at satisfying these criteria (ideally with input from mass participation and multistakeholder processes), depending on the complexity of the decisions involved.

The applied side of my work involves helping technology companies understand the potential benefits of such governance processes, navigate the tradeoffs to identify the most appropriate mechanisms, and apply those methods in practice. (This ended up being a natural extension of prior work advising organizations on product and AI responsibility.)

Highlights of public work and coverage

Much of that work was not public, but involved direct contact with companies to find allies who could see how this could benefit their organization, the public, and our critical institutions. Below is some of the public work and coverage that I can share.

Deliberation for AI Governance

🎤 Can ChatGPT Make This Podcast? | The New York Times (🐦 thread)
Starting about 28 minutes in, I talk about how we can govern and align AI using democratic processes, including at global scale, building on the ideas of "platform democracy" through citizens' assemblies and "generative CI". I back this up with concrete examples of transnational and global deliberations run by the EU and by Meta (Facebook), and of the UN using AI to support such governance in war-torn Libya.

📃 “Democratising AI”: Multiple Meanings, Goals, and Methods
Section 5.2 describes more formally how global AI governance might work, and how 'representative deliberative processes' (like citizens' assemblies) overcome some of the critical challenges of multistakeholder and participatory processes.

📰 Can ‘we the people’ keep AI in check? | TechCrunch (🐦)
Overview of the idea of using citizens' assemblies for the governance of AI.

📰 Red Teaming GPT-4 Was Valuable. Violet Teaming Will Make It Better | WIRED
An op-ed I wrote about getting ahead of the impacts of AI—and reiterating the potential for deliberative processes for AI governance.

📺 Generative AI Is About To Reset Everything, And, Yes It Will Change Your Life | Forbes - YouTube
A well-produced mini-documentary on generative AI with over 750K views, in which I introduce the potential of using deliberative processes for AI governance.

AI for deliberative governance

These papers are somewhat more technical, but get into the details of what it looks like to augment deliberative governance with AI.

📃 'Generative CI' through Collective Response Systems | arXiv (🐦 thread)
Distills the core components of governance and sense-making tools like Polis, Remesh, and PSi into a set of key properties and principles, in order to enable a richer understanding of the possibility space. In doing so, it illustrates a potential correspondence between generative collective intelligence processes and generative AI processes.
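
The paper itself is conceptual, but as a rough, hypothetical illustration of what this class of tools does (not the paper's specification), here is a minimal sketch of one round of a collective response process: participants rate each other's short responses, participants are grouped by their rating patterns, and responses that earn agreement across groups are surfaced. The simulated ratings, the two-group split along the first principal component, and the bridging score heuristic are all illustrative assumptions, loosely modeled on Polis-style aggregation.

```python
import numpy as np

# Toy collective response round: participants rate each other's short responses
# with agree (+1), disagree (-1), or pass (0).
rng = np.random.default_rng(0)
n_participants = 40

group = rng.integers(0, 2, n_participants)                         # two loose opinion groups
divisive = np.where(group[:, None] == 0, 1, -1) * rng.choice([1, 1, 0], size=(n_participants, 12))
bridging = rng.choice([1, 1, 0], size=(n_participants, 3))          # a few responses most people endorse
ratings = np.hstack([divisive, bridging]).astype(float)

# Aggregation sketch, loosely modeled on Polis: project participants onto the
# top principal component of the centered ratings matrix and split them into
# two opinion clusters.
centered = ratings - ratings.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
cluster = (centered @ vt[0] > 0).astype(int)

# Bridging score: minimum approval rate across clusters, so a response only
# ranks highly if every cluster tends to agree with it.
approval = (ratings > 0).astype(float)
per_cluster = np.vstack([approval[cluster == c].mean(axis=0) for c in (0, 1)])
bridging_score = per_cluster.min(axis=0)
print("responses ranked by cross-cluster agreement:", np.argsort(-bridging_score))
```

With the simulated data above, the last three (bridging) responses rank first, since the divisive responses are approved by only one of the two opinion clusters.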

📃 Elicitation Inference Optimization for Multi-Principal-Agent Alignment | NeurIPS FMDM
Shows how one can scale collective response processes using elicitation inference with the help of a large language model and a latent factor model.
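
As a rough illustration only (not the paper's exact method), the sketch below shows the general shape of elicitation inference: each participant answers a small subset of statements, and the remaining answers are inferred by combining statement embeddings (which an LLM would supply in practice; random vectors stand in here) with a low-rank latent factor model shared across participants. The per-participant ridge regression and SVD truncation are simplifying assumptions made for this sketch.

```python
import numpy as np

# Hypothetical sketch of elicitation inference: each participant rates only a
# few statements; we infer the rest from statement embeddings plus a latent
# factor model shared across participants.
rng = np.random.default_rng(7)
n_people, n_statements, d_embed, rank = 60, 40, 12, 3

emb = rng.normal(size=(n_statements, d_embed))                  # stand-in for LLM statement embeddings
latent = rng.normal(size=(n_people, rank))                      # toy ground-truth opinion factors
loadings = rng.normal(size=(rank, d_embed))
true_ratings = latent @ loadings @ emb.T                        # what everyone *would* answer
observed = rng.random((n_people, n_statements)) < 0.25          # each person answers ~25% of statements

# Step 1: per-participant ridge regression from embeddings to observed ratings.
W = np.zeros((n_people, d_embed))
reg = 1.0 * np.eye(d_embed)
for i in range(n_people):
    X, y = emb[observed[i]], true_ratings[i, observed[i]]
    W[i] = np.linalg.solve(X.T @ X + reg, X.T @ y)

# Step 2: latent factor model: keep only the top factors of the weight matrix,
# so participants with few answers borrow structure from similar participants.
U, s, Vt = np.linalg.svd(W, full_matrices=False)
W_lowrank = (U[:, :rank] * s[:rank]) @ Vt[:rank]
pred = W_lowrank @ emb.T

# Evaluate how well we infer the ratings nobody was actually asked for.
held_out = ~observed
corr = np.corrcoef(pred[held_out], true_ratings[held_out])[0, 1]
print(f"correlation with unasked ratings: {corr:.2f}")
```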

Social media governance

One of the core motivations of this work was to provide an alternative approach for determining the objective function of recommender systems—so this is still very much about AI—though I focused more on policy questions as a foot in the door for company allies, since there is a more obvious set of challenges there that new forms of delegated governance can help address.

📃 Towards Platform Democracy: Policymaking Beyond Corporate CEOs and Partisan Pressure | Belfer Center | Harvard (🐦 thread)
Articulates how democratic governance solves a direct pain point for tech companies, and goes into detail on how one particularly viable approach, the citizens' assembly model, can be applied in the context of a technology platform.

🎤 ‎Techdirt: What Is Platform Democracy? on Apple Podcasts
Podcast with Mike Masnick that goes into more detail.

📰 To build trust, platforms should try a little democracy | Platformer
Well-articulated coverage of the platform democracy proposal by Casey Newton.

📧 ‘Platform Democracy’—a very different way to govern powerful tech (🐦 thread)
Alludes to some of the work I did bringing the platform assembly model to Twitter—which would have been piloted in the summer of 2022 had the acquisition bid not (unintentionally) killed it. Also introduces Meta's 32-country deliberation, for which I have been a formal third-party observer, in addition to informally advising their pilots.

As of March 2023, there is significant momentum toward global AI assemblies—I look forward to continuing this work with many more fellow travelers.