Microsoft Research Special Projects

Plural Publics


Shrey Jain, Karen Easterbrook and E. Glen Weyl

Generative foundation models (GFMs) like GPT-4 will transform human productivity. Yet the astonishing capabilities of these models will also strain many social institutions, including the ways we communicate and make sense of the world together. One critical example is the way that human communication depends on social and community context. “Context” is used in a variety of ways, so we will aim to be precise about what we mean by it: background data that is roughly “common knowledge” (known, known to be known, known to be known to be known, and so on).
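For readers who want a more formal anchor, the “known, known to be known, …” hierarchy matches the standard definition of common knowledge in epistemic logic. The notation below is that conventional formalization, supplied here for illustration rather than taken from the paper:

```latex
E_G \varphi \;=\; \bigwedge_{i \in G} K_i \varphi
\qquad\qquad
C_G \varphi \;=\; \bigwedge_{k \ge 1} E_G^{\,k} \varphi
```

Here \(K_i \varphi\) means agent \(i\) knows \(\varphi\), \(E_G \varphi\) means everyone in group \(G\) knows \(\varphi\), and \(C_G \varphi\), common knowledge, holds only when every finite iteration of “everyone knows that everyone knows …” holds.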

As Nancy Baym and danah boyd of the Sociotechnical Systems Research Lab at Microsoft Research have studied, the applications built on top of the internet have already made it challenging to establish and preserve context, for two reasons. First, as we talk to a wider range of people with less shared experience, it is increasingly hard to know what context we may assume in communicating. Second, the speed and ease with which information moves has made it hard to ensure that statements remain within the context in which they were made, often leading to “context collapse,” in which different audiences misunderstand or misjudge information based on their disparate contexts.

GFMs risk dramatically exacerbating the challenges of constructing and maintaining context because they make the possibilities for informational spread so much richer. Soon, most of the information we consume may come from global foundation models (rather than from specific organizations with known audiences), and information we share can be carried out of context and used to train models that recombine it in ways we are only beginning to understand. One particularly extreme example is the recent use of related models to synthesize a family member’s voice in a spear-phishing/confidence attack, showing that, taken to an extreme, decontextualization and recombination of snippets of content can threaten the very integrity of personhood. As GFMs and synthetic attacks like these become common, we expect the use of shared secrets to become increasingly important for authentication, making clear the critical importance of establishing and defending contexts.
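To make the “shared secrets for authentication” point concrete, here is a minimal, illustrative sketch of a challenge–response check two family members could run over an untrusted channel, assuming they have already exchanged a secret in a trusted context. This is our own example, not a design from the paper; the point is simply that a voice-cloning attacker who lacks the secret cannot produce a valid response.

```python
import hmac
import hashlib
import secrets

# Assumption: the two parties exchanged this secret earlier, in a trusted offline context.
SHARED_SECRET = b"exchanged-in-person-context-secret"

def make_challenge() -> bytes:
    """The verifier sends a fresh random challenge so responses cannot be replayed."""
    return secrets.token_bytes(16)

def respond(challenge: bytes, secret: bytes = SHARED_SECRET) -> str:
    """The claimed family member answers with an HMAC over the challenge."""
    return hmac.new(secret, challenge, hashlib.sha256).hexdigest()

def verify(challenge: bytes, response: str, secret: bytes = SHARED_SECRET) -> bool:
    """Constant-time comparison; a synthesized voice alone cannot pass this check."""
    expected = hmac.new(secret, challenge, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)

# Example exchange
challenge = make_challenge()        # verifier -> claimant
answer = respond(challenge)         # claimant -> verifier
assert verify(challenge, answer)    # only holders of the shared secret succeed
```

The specifics (HMAC-SHA256, a 16-byte challenge) are arbitrary choices for the sketch; what matters is that authentication rests on knowledge shared within a context, not on how convincing a voice or message sounds.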

Plural Publics

We recently published a paper titled “Plural Publics” with our partners at the GETTING-Plurality project of the Harvard Edmond & Lily Safra Center for Ethics. The paper argues for moving beyond the dichotomy between “public” and “private” information to seek technical designs that foster “plural publics”: the proliferation of firmly established and strongly protected contexts for communication. We discuss some technologies, leveraging tools of both “publicity” and “privacy,” that may help achieve these goals, including group-chat messaging, deniable and disappearing messages, distributed ledgers, identity certificates, and end-to-end encryption. In the coming months, the PTC will work to build integrated suites of tools to help better achieve these goals, building on our pre-existing, open-source research on systems like U-Prove, which enables cryptographically verifiable attributes that can prove identity, including pseudonymous identities, while protecting privacy and preventing user tracking.
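As one small illustration of how such tools can defend a context, the sketch below shows a message cryptographically bound to the “public” it was written for, using authenticated encryption with the context label as associated data. This is not the paper’s design and not U-Prove; the key-distribution step, the `public_id` label, and the function names are our own illustrative assumptions, and the example relies on the widely used third-party `cryptography` package.

```python
# Illustrative sketch: binding a message to a named context with AES-GCM associated data,
# so that decryption fails if the ciphertext is replayed under a different context label.
# Requires: pip install cryptography
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def new_public_key() -> bytes:
    """Symmetric key shared only by members of one public (distribution not shown)."""
    return AESGCM.generate_key(bit_length=256)

def seal(key: bytes, public_id: str, message: str) -> tuple[bytes, bytes]:
    """Encrypt a message and authenticate the context label alongside it."""
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, message.encode(), public_id.encode())
    return nonce, ciphertext

def open_in_context(key: bytes, public_id: str, nonce: bytes, ciphertext: bytes) -> str:
    """Decryption succeeds only when the same context label is supplied."""
    return AESGCM(key).decrypt(nonce, ciphertext, public_id.encode()).decode()

key = new_public_key()
nonce, ct = seal(key, "neighborhood-book-club", "meeting moved to Thursday")
print(open_in_context(key, "neighborhood-book-club", nonce, ct))   # succeeds
# open_in_context(key, "public-web", nonce, ct)  # raises InvalidTag: wrong context
```

The design choice worth noting is that the context label travels as authenticated metadata rather than as part of the secret payload: anyone can see which public a message belongs to, but no one can silently move it into another one.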

Yet these explorations are just the beginning of the work needed in this area, and open questions abound. What is the social interface between what a machine knows and what a human knows? To what extent, and in what cases, is deniability sufficient for preserving context? When, and in what amount, is common knowledge necessary or desired as humans begin to communicate alongside artificial agents? Can we measure the context needed to enter a public? How do we facilitate communication and interoperation across many nested and intersecting publics without undermining contextual integrity? What is the most efficient way to scale digital certificates on the internet today? What identification methods do we need so that members of a public can communicate securely with one another? What lessons can be drawn from the digital rights management ecosystem for the development of publics today?

The PTC understands the urgency of answering these questions as GFMs continue to improve and spread beyond boundaries where their use can be policed by responsible actors. The PTC will continue to embrace this generational challenge and welcome every partner willing to join us in confronting it.