{"id":928233,"date":"2023-03-19T06:33:52","date_gmt":"2023-03-19T13:33:52","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-blog-post&p=928233"},"modified":"2023-03-19T06:33:54","modified_gmt":"2023-03-19T13:33:54","slug":"plural-publics","status":"publish","type":"msr-blog-post","link":"https:\/\/www.microsoft.com\/en-us\/research\/articles\/plural-publics\/","title":{"rendered":"Plural Publics"},"content":{"rendered":"\n

Shrey Jain, Karen Easterbrook and E. Glen Weyl

Generative foundation models (GFMs) like GPT-4 will transform human productivity. Yet the astonishing capabilities of these models will also strain many social institutions, including the ways we communicate and make sense of the world together. One critical example is the way that human communication depends on social and community context. “Context” is used in a variety of ways, so we will be precise about what we mean by it here: background data that is roughly “common knowledge” (known, known to be known, known to be known to be known, and so on).
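For readers who want that parenthetical made precise, the standard epistemic-logic formalization of common knowledge captures exactly this infinite tower of "everyone knows." The operators below ($K_i$, $E_G$, $C_G$) are textbook notation added here for reference, not terminology from the post:

```latex
% Standard definition of common knowledge in a group G (textbook notation,
% added for reference). K_i \varphi means "agent i knows \varphi".
E_G\,\varphi \;=\; \bigwedge_{i \in G} K_i\,\varphi
  \qquad \text{(everyone in } G \text{ knows } \varphi\text{)}

C_G\,\varphi \;=\; \bigwedge_{k \ge 1} E_G^{\,k}\,\varphi
  \qquad \text{(known, known to be known, known to be known to be known, \dots)}
```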

As Nancy Baym and danah boyd of the Sociotechnical Systems Research Lab at Microsoft Research have studied, the applications built on top of the internet have already made it challenging to establish and preserve context, for two reasons. First, as we talk to a wider range of people with less shared experience, it is increasingly hard to know what context we may assume in communicating. Second, the speed and ease with which information moves have made it hard to ensure that statements remain within the context in which they were made, often leading to “context collapse,” in which different audiences misunderstand or misjudge information because their contexts differ.

GFMs risk dramatically exacerbating the challenges of constructing and maintaining context because they make the possibilities for informational spread so much richer. Soon, most of the information we consume may come from global foundation models rather than from specific organizations with known audiences, and information we share can be carried out of context and used to train models that recombine it in ways we are only beginning to understand. One particularly extreme example is the recent use of related models to synthesize a family member’s voice in a spear-phishing/confidence attack, showing that, taken to an extreme, decontextualization and recombination of snippets of content can threaten the very integrity of personhood. As GFMs and synthetic attacks like these become common, we expect shared secrets to become increasingly important for authentication, underscoring the critical importance of establishing and defending contexts.
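To make the shared-secret point concrete, here is a minimal challenge-response sketch in Python using only the standard library. The function names and the "family" scenario are illustrative assumptions, not a description of any deployed system: the prover demonstrates possession of a secret established out of band (for example, in person), without ever transmitting the secret, so a cloned voice alone cannot pass the check.

```python
import hmac
import hashlib
import secrets

# Illustrative sketch only: a secret shared in person within a small "public"
# (e.g., a family) supports challenge-response authentication, so a
# synthesized voice alone cannot pass the check.

def issue_challenge() -> bytes:
    """Verifier sends a fresh random nonce so responses cannot be replayed."""
    return secrets.token_bytes(32)

def respond(shared_secret: bytes, challenge: bytes) -> bytes:
    """Prover answers with an HMAC over the challenge, never revealing the secret."""
    return hmac.new(shared_secret, challenge, hashlib.sha256).digest()

def verify(shared_secret: bytes, challenge: bytes, response: bytes) -> bool:
    """Verifier recomputes the expected tag and compares in constant time."""
    expected = hmac.new(shared_secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# Usage: both sides already hold `family_secret` from an in-person exchange
# (hypothetical pre-shared secret, generated here only for the demo).
family_secret = secrets.token_bytes(32)
challenge = issue_challenge()
assert verify(family_secret, challenge, respond(family_secret, challenge))
```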

#### Plural Publics

We recently published a paper titled “Plural Publics” with our partners at the GETTING-Plurality project of Harvard’s Edmond & Lily Safra Center for Ethics. The paper argues for moving beyond the dichotomy between “public” and “private” information to seek technical designs that foster “plural publics”: the proliferation of firmly established and strongly protected contexts for communication. We discuss some technologies, leveraging tools of both “publicity” and “privacy,” that may help achieve these goals, including group-chat messaging, deniable and disappearing messages, distributed ledgers, identity certificates, and end-to-end encryption. In the coming months, the PTC will work to build integrated suites of tools that better achieve these goals, building off our pre-existing, open-source research on systems like U-Prove, which enables cryptographically verifiable attributes that can prove identity, including pseudonymous identities, while protecting privacy and preventing user tracking.
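As a rough illustration of how a protected context can be enforced cryptographically (a generic sketch using the widely available Python `cryptography` package, not the PTC tool suite or the U-Prove API), authenticated encryption lets a group bind each message to a named public, so a ciphertext lifted into a different context simply fails to decrypt:

```python
# Illustrative sketch, not PTC or U-Prove code: binding a message to a named
# "public" with authenticated encryption (AES-GCM from the `cryptography` package).
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_for_public(group_key: bytes, context_label: bytes, plaintext: bytes):
    """Encrypt so the ciphertext only verifies under the same context label."""
    nonce = os.urandom(12)
    ciphertext = AESGCM(group_key).encrypt(nonce, plaintext, context_label)
    return nonce, ciphertext

def decrypt_in_public(group_key: bytes, context_label: bytes,
                      nonce: bytes, ciphertext: bytes) -> bytes:
    """Raises InvalidTag if the key or the context label does not match."""
    return AESGCM(group_key).decrypt(nonce, ciphertext, context_label)

group_key = AESGCM.generate_key(bit_length=256)  # hypothetical key shared by the public
nonce, ct = encrypt_for_public(group_key, b"public:family-group-chat", b"see you at 6")
assert decrypt_in_public(group_key, b"public:family-group-chat", nonce, ct) == b"see you at 6"
# Reading the same ciphertext in a different context fails:
# decrypt_in_public(group_key, b"public:workplace", nonce, ct)  # raises InvalidTag
```

The context label here plays the role of associated data: it is authenticated but not secret, which is one simple way to make “which public is this for?” part of a message’s integrity.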

Yet these explorations are just the beginning of the work needed in this area, and open questions abound. What is the social interface between what a machine knows and what a human knows? To what extent, and in what cases, is deniability sufficient for preserving context? When, and in what amount, is common knowledge necessary or desired as humans begin to communicate alongside artificial agents? Can we measure the context necessary to enter a public? How do we facilitate communication and interoperation across many nested and intersecting publics without undermining contextual integrity? What is the most efficient way to scale digital certificates on the internet today? What identification methods do we need so that members of a public can communicate securely with one another? What lessons can be drawn from the digital rights management ecosystem for the development of publics today?
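On the deniability question specifically, a small sketch (again standard-library Python, purely illustrative) shows the distinction at stake: a MAC computed under a shared key convinces the other members of a public but proves nothing to outsiders, whereas a digital signature is verifiable by anyone holding the public key and so travels out of context with its author attached.

```python
# Illustrative sketch of the deniability question above; names are hypothetical.
import hmac, hashlib, secrets

shared_key = secrets.token_bytes(32)          # held by both Alice and Bob
msg = b"this stays inside our public"
tag = hmac.new(shared_key, msg, hashlib.sha256).digest()

# Bob can verify the tag, but because Bob could have computed the same tag
# himself, it proves nothing to a third party about who wrote the message:
# the transcript is deniable. A digital signature (e.g., Ed25519), by contrast,
# binds the author for anyone who sees it, which can itself undermine context.
assert hmac.compare_digest(tag, hmac.new(shared_key, msg, hashlib.sha256).digest())
```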

The PTC understands the urgency of answering these questions as GFMs continue to improve and spread beyond boundaries where their use can be policed by responsible actors. The PTC will continue to embrace this generational challenge and welcome every partner willing to join us in confronting it.
