{"id":964494,"date":"2023-09-05T09:00:00","date_gmt":"2023-09-05T16:00:00","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?p=964494"},"modified":"2023-10-04T06:22:26","modified_gmt":"2023-10-04T13:22:26","slug":"rethinking-trust-in-direct-messages-in-the-ai-era","status":"publish","type":"post","link":"https:\/\/www.microsoft.com\/en-us\/research\/blog\/rethinking-trust-in-direct-messages-in-the-ai-era\/","title":{"rendered":"Rethinking trust in direct messages in the AI era"},"content":{"rendered":"\n
\"Rethinking<\/figure>\n\n\n\n

This blog post is part of a series exploring our research in privacy, security, and cryptography. For the previous post, see https://www.microsoft.com/en-us/research/blog/research-trends-in-privacy-security-and-cryptography. While AI has the potential to massively increase productivity, this power can be used equally well for malicious purposes, for example, to automate the creation of sophisticated scam messages. In this post, we explore the threats AI can pose to online communication ecosystems and outline a high-level approach to mitigating them.

## Communication in the age of AI

Concerns about the influence of AI on the integrity of online communication are increasingly shared by policymakers, AI researchers, business leaders, and others. These concerns are well founded: benign AI chatbots can easily be repurposed to impersonate people, spread misinformation, and sway both public opinion and personal beliefs. So-called “spear phishing” attacks, which are personalized to the target, have proved devastatingly effective. They are particularly dangerous for victims who do not use multifactor authentication, since an attacker who steals their login credentials with a phishing email can then access genuine services with those credentials. Organized cybercrime has not missed this opportunity; AI-powered tools marketed to scammers and fraudsters are already emerging. This is disturbing, because democratic systems, business integrity, and interpersonal relationships all hinge on credible and effective communication, a process that has largely migrated to the digital sphere.

As we enter a world where people increasingly interact with artificial agents, it is critical to acknowledge that these challenges from generative AI are not merely hypothetical. In the context of our product offerings at Microsoft, they materialize as genuine threats that we are actively addressing. We are beginning to witness the impact of AI in generating highly specific types of text (emails, reports, scripts, code) in a personalized, automated, and scalable manner. In the workplace, AI-powered tools are expected to bring a huge increase in productivity, allowing people to focus on the more creative parts of their work rather than on tedious, repetitive details. AI-powered tools can also improve productivity and communication for people with disabilities, or among people who do not speak the same language.

In this blog post, we focus on the challenge of establishing trust and accountability in direct communication between two people, such as email, direct messages on social media platforms, SMS, and even phone calls. In all these scenarios, messaging commonly takes place between individuals who share little or no prior context or connection, yet those messages may carry information of high importance. Examples include emails discussing job prospects, new connections from mutual friends, and unsolicited but important phone calls. The communication may be initiated on behalf of an organization or an individual, but in either case we encounter the same problem: if the message proves to be misleading, malicious, or otherwise inappropriate, holding anyone accountable for it is impractical, may require slow and difficult legal procedures, and does not extend across different communication platforms.

As the scale of these activities increases, there is also a growing need for a flexible *cross-platform accountability mechanism* that allows both the message sender and the receiver to explicitly declare the nature of their communication. Concretely, the sender should be able to declare accountability for their message, and the receiver should be able to hold the sender accountable if the message is inappropriate.
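To make this concrete, the sketch below illustrates one way such a declaration could work in principle: the sender signs the message together with an identity claim and a timestamp, and the receiver verifies that signature before deciding whether to trust (or later report) the sender. This is a minimal illustration of the idea, not a description of any actual Microsoft system; the key handling, identity binding, and function names are all hypothetical, and it assumes the third-party `cryptography` package.

```python
# Minimal sketch of a sender-declared accountability token (hypothetical design,
# not a real system). Requires: pip install cryptography
import json
import time

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def declare_accountability(key: Ed25519PrivateKey, sender: str, message: str) -> dict:
    """Sender binds an identity claim and a timestamp to the message and signs them."""
    claim = {"sender": sender, "sent_at": int(time.time()), "message": message}
    payload = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim, "signature": key.sign(payload).hex()}


def verify_accountability(public_key: Ed25519PublicKey, token: dict) -> bool:
    """Receiver checks that the claim was really signed by the declared sender's key."""
    payload = json.dumps(token["claim"], sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(token["signature"]), payload)
        return True
    except InvalidSignature:
        return False


# In a real mechanism, the public key would come from a trusted, cross-platform
# identity service rather than being exchanged directly between the two parties.
sender_key = Ed25519PrivateKey.generate()
token = declare_accountability(sender_key, "alice@example.com", "Hello about the role")
print(verify_accountability(sender_key.public_key(), token))  # True
```

A workable mechanism would of course need much more than a signature check, including a trusted binding between keys and identities, revocation, and a way for the receiver to present a verified report of misuse; that cross-platform layer is precisely what individual messaging apps cannot provide on their own.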

## Elements of accountability

The problems outlined above are not exactly new, but recent advances in AI have made them more urgent. Over the past several years, the tech community, alongside media organizations and others, has investigated ways to determine whether text or images were created by AI; for example, C2PA, a standard for signed content-provenance metadata, is one possible solution among others. With AI-powered tools increasingly used in the workplace, Microsoft believes that it will take a combination of approaches to provide the greatest value and transparency to users.
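As a rough intuition for how provenance approaches of this kind work, the toy example below attaches a signed manifest to a piece of content, recording how it was produced, so that any platform holding the creator's public key can later check both the signature and that the content still matches the manifest. This is not the actual C2PA manifest format; the field names and the `verify` helper are illustrative only, and it again assumes the third-party `cryptography` package.

```python
# Toy illustration of the content-provenance idea behind standards like C2PA.
# NOT the real C2PA format; all field names are illustrative.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

content = b"Quarterly summary drafted with an AI writing assistant."
manifest = {
    "content_sha256": hashlib.sha256(content).hexdigest(),
    "assertions": {"generator": "example-ai-assistant", "human_reviewed": True},
}

creator_key = Ed25519PrivateKey.generate()
signature = creator_key.sign(json.dumps(manifest, sort_keys=True).encode())


def verify(content: bytes, manifest: dict, signature: bytes,
           public_key: Ed25519PublicKey) -> bool:
    """Check that the manifest matches the content and carries a valid signature."""
    if hashlib.sha256(content).hexdigest() != manifest["content_sha256"]:
        return False  # Content was altered after the manifest was signed.
    try:
        public_key.verify(signature, json.dumps(manifest, sort_keys=True).encode())
        return True
    except InvalidSignature:
        return False


print(verify(content, manifest, signature, creator_key.public_key()))  # True
```

Note how this differs from the accountability token sketched earlier: a provenance manifest makes claims about how content was produced, while an accountability declaration makes claims about who stands behind a message.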

Focusing on accountability is one such approach. We can start by listing some properties we expect of any workable solution: