Personhood credentials: Artificial intelligence and the value of privacy-preserving tools to distinguish who is real online

  • Steven Adler
  • Zoë Hitzig
  • Catherine Brewer
  • Wayne Chang
  • Renée DiResta
  • Eddy Lazzarin
  • Sean McGregor
  • Wendy Seltzer
  • Divya Siddarth
  • Nouran Soliman
  • Tobin South
  • Connor Spelliscy
  • Manu Sporny
  • Varya Srivastava
  • John Bailey
  • Brian Christian
  • Andrew Critch
  • Ronnie Falcon
  • Heather Flanagan
  • Kim Hamilton Duffy
  • Eric Ho
  • Claire R. Leibowicz
  • Srikanth Nadhamuni
  • Alan Z. Rozenshtein
  • David Schnurr
  • Evan Shapiro
  • Lacey Strahm
  • Andrew Trask
  • Zoe Weinberg
  • Cedric Whitney
  • Tom Zick

Anonymity is an important principle online. However, malicious actors have long used misleading identities to conduct fraud, spread disinformation, and carry out other deceptive schemes. With the advent of increasingly capable AI, bad actors can amplify the potential scale and effectiveness of their operations, intensifying the challenge of balancing anonymity and trustworthiness online. In this paper, we analyze the value of a new tool to address this challenge: "personhood credentials" (PHCs), digital credentials that empower users to demonstrate that they are real people — not AIs — to online services, without disclosing any personal information. Such credentials can be issued by a range of trusted institutions — governments or otherwise. A PHC system, according to our definition, could be local or global, and does not need to be biometrics-based. Two trends in AI contribute to the urgency of the challenge: AI's increasing indistinguishability (i.e., lifelike content and avatars, agentic activity) from people online, and AI's increasing scalability (i.e., cost-effectiveness, accessibility). Drawing on a long history of research into anonymous credentials and "proof-of-personhood" systems, personhood credentials give people a way to signal their trustworthiness on online platforms, and offer service providers new tools for reducing misuse by bad actors. In contrast, existing countermeasures to automated deception — such as CAPTCHAs — are inadequate against sophisticated AI, while stringent identity verification solutions are insufficiently private for many use cases. After surveying the benefits of personhood credentials, we also examine deployment risks and design challenges. We conclude with actionable next steps for policymakers, technologists, and standards bodies to consider in consultation with the public.
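The core mechanism the abstract gestures at, proving that a trusted issuer has vouched for one's personhood without revealing which person, can be illustrated with a classic building block from the anonymous-credentials literature. The sketch below is ours, not the paper's: a Chaum-style RSA blind signature in Python, with toy parameters and no padding, so it is illustrative only. The issuer signs a blinded token after a one-time, out-of-band personhood check; any service can later verify the unblinded signature with the issuer's public key, yet cannot link that presentation back to the issuance or to the holder.

```python
# Toy sketch (not from the paper) of an unlinkable personhood-credential flow,
# built on Chaum-style RSA blind signatures. The hard-coded primes and missing
# padding make this illustrative only; deployed systems would use standardized
# blind-signature or anonymous-credential schemes.

import hashlib
import secrets
from math import gcd

# --- Issuer key material (toy parameters, far too small for real use) -------
P = 2**61 - 1                         # Mersenne prime
Q = 2**89 - 1                         # Mersenne prime
N = P * Q                             # issuer's public modulus
E = 65537                             # public verification exponent
D = pow(E, -1, (P - 1) * (Q - 1))     # issuer's private signing exponent


def full_domain_hash(msg: bytes) -> int:
    """Hash a message to an integer modulo N (simplified full-domain hash)."""
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % N


# --- Holder: choose a secret token and blind it ------------------------------
token = secrets.token_bytes(32)       # holder-chosen credential token
while True:
    r = secrets.randbelow(N - 2) + 2  # blinding factor
    if gcd(r, N) == 1:
        break
blinded = (full_domain_hash(token) * pow(r, E, N)) % N  # hides `token` from issuer

# --- Issuer: after a one-time personhood check (out of band), sign blindly ---
blind_sig = pow(blinded, D, N)

# --- Holder: unblind to obtain a signature on the original token -------------
sig = (blind_sig * pow(r, -1, N)) % N

# --- Any service: verify with the issuer's public key only -------------------
# The service learns "a real person holds a valid credential", but cannot link
# this presentation back to the issuance event or to the holder's identity.
assert pow(sig, E, N) == full_domain_hash(token)
print("personhood credential verified")
```

A production personhood-credential system could instead build on standardized primitives (for example, blind RSA signatures in the style of RFC 9474, BBS signatures, or zero-knowledge proofs over existing credentials) and would need per-service rate limits and revocation, which this toy flow omits.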