Research trends in privacy, security and cryptography

November 17, 2022
\"The<\/figure>\n\n\n\n

Trust is essential for people and organizations to use technology with confidence. At Microsoft, we strive to earn the trust of our customers, employees, communities, and partners by committing to privacy, security, the responsible use of AI, and transparency.

At Microsoft Research, we take on this challenge by creating and using state-of-the-art tools and technologies that support a proactive, integrated approach to security across all layers of the digital estate.

Cybersecurity threats are constant and continue to grow, impacting organizations and individuals everywhere. Attack tools are readily available, and well-funded adversaries now have the capability to cause unprecedented harm. These threats help explain why U.S. President Joe Biden issued an executive order in 2021 calling for cybersecurity improvements. Similarly, the European Union recently called for stronger protection of its information and communication technology (ICT) supply chains.

Against that backdrop, Microsoft Research is focused on what comes next in security and privacy. New and emerging computing frontiers, like the metaverse and web3, will require consistent advances in identity, transparency, and other security principles in order to learn from the past and unlock these technologies' potential. Developments in quantum computing and advances in machine learning and artificial intelligence offer great potential to advance science and the human condition. Our research aims to ensure that future breakthroughs come with robust safety and privacy protections, even as they accelerate profound changes and new business opportunities.

At Microsoft Research, we pursue ambitious projects to improve the privacy and security of everyone on the planet. This is the first blog post in a series exploring the work we do in privacy, security, and cryptography. In future installments, we will dive deeper into the research challenges we are addressing and the opportunities we see.


Digital identities

While the internet was not originally built with an identity layer, digital identities have grown to become foundational elements of today's web and impact people's lives even beyond the digital world. Our research aims to modernize digital identities and build more robust, usable, private, and secure user-centric identity systems, putting each of us in control of our own digital identities.

This work includes researching cryptographic algorithms that enable privacy-preserving, open-source, user-centric identity systems. Such systems would let people present cryptographically signed electronic claims and selectively choose which information they wish to disclose, while preventing tracking of people across presentations of a claim. Our approach would preserve an individual's privacy and work with existing web protocols to provide easy and safe access to a wide range of resources and activities.
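To make selective disclosure concrete, here is a minimal sketch using salted hash commitments: the issuer signs commitments to every claim, and the holder later reveals only chosen attributes along with their salts. This is an illustration only, assuming the open-source pyca/cryptography package; production systems use special-purpose signatures (such as BBS+) that also make repeated presentations unlinkable, which this sketch does not provide.

```python
# Minimal sketch of salted-hash selective disclosure (illustrative only).
import hashlib
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def commit(name: str, value: str, salt: bytes) -> bytes:
    # Commit to one claim under a per-claim random salt.
    return hashlib.sha256(salt + f"{name}={value}".encode()).digest()

# Issuer: sign the concatenated commitments, never the raw claims.
claims = {"name": "Alice", "birth_year": "1990", "city": "Redmond"}
salts = {k: os.urandom(16) for k in claims}
commitments = {k: commit(k, v, salts[k]) for k, v in claims.items()}
digest = b"".join(commitments[k] for k in sorted(commitments))
issuer_key = Ed25519PrivateKey.generate()
signature = issuer_key.sign(digest)

# Holder: disclose only "city"; other claims stay hidden as commitments,
# which the holder transmits alongside the disclosed values.
disclosed = {"city": (claims["city"], salts["city"])}

# Verifier: recompute the disclosed commitment, rebuild the digest, and
# check the issuer's signature (verify() raises if anything was altered).
rebuilt = dict(commitments)
for name, (value, salt) in disclosed.items():
    rebuilt[name] = commit(name, value, salt)
issuer_key.public_key().verify(signature, b"".join(rebuilt[k] for k in sorted(rebuilt)))
print("verified disclosure: city =", disclosed["city"][0])
```

The verifier learns nothing about hidden claims beyond their commitments, but the commitments themselves are stable identifiers across presentations; avoiding that linkability is precisely why the cryptographic algorithms we are researching go beyond plain hashes.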

Our research also includes investigating innovative ways for people to manage their identity secrets reliably and safely, without giving any centralized party full access to them. Success in this area will also require scalable and verifiable methods to distribute identity public keys, so people can know exactly who they are interacting with.

Media provenance and authenticity

Advances in graphics and machine learning algorithms have enabled the creation of easy-to-use editing tools. While useful in many ways, this technology has also enabled fraud and manipulation of digital images and media, known as deepfakes. Early fakes were easy to spot, but current versions are becoming nearly impossible for machines or people to detect. The potential proliferation of fakes that are indistinguishable from reality undermines society's trust in everything we see and hear.

Rather than trying to detect fakes, Microsoft Research has developed technology to determine the source of any digital media object and whether it has been altered. We do this by adding digitally signed manifests to video, audio, or images. The source of these media objects might be well-known news organizations, governments, or even individuals using apps on mobile devices.
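The mechanism is easy to sketch: bind a cryptographic hash of the media bytes to a statement about their source and sign the result, so any later edit invalidates the signature. The sketch below is an ad-hoc illustration assuming the pyca/cryptography package, not the standardized manifest format used in practice.

```python
# Minimal sketch of a signed provenance manifest (illustrative; real
# deployments follow an industry-standard manifest format such as C2PA).
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def make_manifest(media: bytes, source: str, key: Ed25519PrivateKey) -> dict:
    body = {"source": source, "sha256": hashlib.sha256(media).hexdigest()}
    payload = json.dumps(body, sort_keys=True).encode()
    return {"body": body, "signature": key.sign(payload).hex()}

def verify_manifest(media: bytes, manifest: dict, public_key) -> bool:
    # Any edit to the media changes its hash and invalidates the manifest.
    if manifest["body"]["sha256"] != hashlib.sha256(media).hexdigest():
        return False
    payload = json.dumps(manifest["body"], sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(manifest["signature"]), payload)
        return True
    except Exception:
        return False

key = Ed25519PrivateKey.generate()
photo = b"...raw image bytes..."
manifest = make_manifest(photo, "Example News Organization", key)
print(verify_manifest(photo, manifest, key.public_key()))              # True
print(verify_manifest(photo + b"edited", manifest, key.public_key()))  # False
```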

Since media creation, distribution, and consumption are complex and involve many industries, Microsoft has helped standards organizations specify how these signatures are added to media objects. We are also working with news organizations such as the BBC, The New York Times, and CBC to promote media provenance as a mitigation for misinformation on social media networks.

Hardware security foundations

To promote cyber-resilience, we are developing systems that can detect a cyberattack and safely shut down, protecting data and blocking the attacker. If compromised, these systems are designed to be repaired quickly and securely. They are built with simple hardware features that provide very high levels of protection for repair and recovery modules. To enable reliable detection of compromised systems, we are also developing storage features that can be used to protect security event logs, making it harder for attackers to cover their tracks.
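As a software illustration of the log-protection idea, here is a minimal sketch of a hash-chained, tamper-evident event log: each entry commits to its predecessor, so rewriting history breaks the chain. The hardware storage features described above provide far stronger guarantees; this sketch, with invented event names, shows only the principle.

```python
# Minimal sketch of a tamper-evident, hash-chained security event log.
import hashlib
import json

class EventLog:
    def __init__(self):
        self.entries = []         # list of (event, chained-hash) pairs
        self.head = b"\x00" * 32  # genesis value

    def append(self, event: dict) -> None:
        payload = json.dumps(event, sort_keys=True).encode()
        self.head = hashlib.sha256(self.head + payload).digest()
        self.entries.append((event, self.head))

    def verify(self) -> bool:
        # Recompute the chain; an edited or deleted entry changes every
        # hash after it, so tampering cannot go unnoticed.
        h = b"\x00" * 32
        for event, stored in self.entries:
            h = hashlib.sha256(h + json.dumps(event, sort_keys=True).encode()).digest()
            if h != stored:
                return False
        return True

log = EventLog()
log.append({"event": "login_failure", "user": "admin"})
log.append({"event": "privilege_escalation", "user": "admin"})
print(log.verify())              # True
log.entries[0][0]["user"] = "x"  # an attacker edits the log...
print(log.verify())              # False: the chain no longer matches
```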

Security analytics

Modern-day computers and networks are under constant attack by hackers of all kinds. In this seemingly never-ending cat-and-mouse contest, securing and defending today's global systems is a multi-billion-dollar enterprise. Managing the massive quantities of security data collected is increasingly challenging, which creates an urgent need for disruptive innovation in security analytics.

We are investigating a transformer-based approach to modeling and analyzing large-scale security data. Applying and tuning such models is a novel field of study that could change the game for security analytics.
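One way to picture the approach: treat streams of security events the way language models treat text, so that a sequence model can learn what normal activity looks like and flag surprising continuations. The sketch below shows only this framing step, with invented event names; it is not our actual pipeline.

```python
# Minimal sketch: security telemetry framed as token sequences for a
# transformer-style model (event names are invented for illustration).
events = [
    "login_failure", "login_failure", "login_success",
    "new_process:powershell", "outbound_conn:unusual_port",
]

# Build a vocabulary and map each event to a token ID, as a tokenizer would.
vocab = {tok: i for i, tok in enumerate(sorted(set(events)))}
token_ids = [vocab[e] for e in events]
print(token_ids)  # [0, 0, 1, 2, 3]: ready for an embedding layer

# A model trained on many such "sentences" can score how surprising a new
# event is given its context, which is the core of sequence-based detection.
```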

Privacy-preserving machine learning

A privacy-preserving AI system should generalize so well that its behavior reveals no personal or sensitive details that may have been contained in the original data on which it was trained.

How close can we get to this ideal? Differential privacy can enable analysts to extract useful insights from datasets containing personal information while strengthening privacy protections. This method introduces "statistical noise": enough noise to prevent AI models from compromising the privacy of any individual, yet little enough to still provide accurate, useful findings. Our recent results show that large language models can be particularly effective differentially private learners.
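For intuition, here is a minimal sketch of the classic Laplace mechanism on a counting query: the noise is calibrated to the query's sensitivity divided by the privacy parameter epsilon. Differentially private model training (for example, DP-SGD) applies the same calibrated-noise idea to gradients rather than to query answers.

```python
# Minimal sketch of the Laplace mechanism for a differentially private count.
import numpy as np

def dp_count(values, predicate, epsilon: float) -> float:
    true_count = sum(1 for v in values if predicate(v))
    sensitivity = 1.0  # one person joining/leaving changes a count by at most 1
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

ages = [34, 29, 52, 41, 67, 23, 45]
# Smaller epsilon means more noise and stronger privacy; the true count is 4.
print(dp_count(ages, lambda a: a > 40, epsilon=1.0))
```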

Another approach, federated learning, enables large models to be trained and fine-tuned on customers' own devices to protect the privacy of their data and to respect data boundaries and data-handling policies. At Microsoft Research, we are creating an orchestration infrastructure for developers to deploy cross-platform, cross-device federated learning solutions.
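At its core, federated learning aggregates model updates rather than raw data. The toy sketch below shows the federated-averaging loop under that assumption; the local step stands in for real on-device training, and nothing here reflects the orchestration infrastructure itself.

```python
# Minimal sketch of federated averaging (FedAvg): devices send model
# updates to the server; their raw data never leaves the device.
import numpy as np

def local_update(weights: np.ndarray, local_data: np.ndarray, lr: float = 0.1) -> np.ndarray:
    # Stand-in for local training: one gradient step toward the local mean.
    gradient = weights - local_data.mean(axis=0)
    return weights - lr * gradient

global_weights = np.zeros(4)
device_datasets = [np.random.randn(50, 4) + i for i in range(3)]  # private per-device data

for _ in range(10):
    updates = [local_update(global_weights, data) for data in device_datasets]
    global_weights = np.mean(updates, axis=0)  # the server only ever sees updates

print(global_weights)  # drifts toward the average of the devices' local means
```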

Protecting data during training or fine-tuning is just one piece of the puzzle. Whenever AI is used in a personalized context, it may unintentionally leak information about the target of the personalization. Therefore, we must be able to describe the threat model for a complete deployment of a system with AI components, rather than just a single part of it.

Read more about our work on these and other related topics in an earlier blog post.

Confidential computing

Confidential computing has emerged as a practical solution to securing compute workloads in cloud environments, even from malicious cloud administrators. Azure already offers confidential computing environments in multiple regions, leveraging Trusted Execution Environments (TEEs) available in multiple hardware platforms.

Imagine if all computation took place in TEEs, where services could access sensitive data only after they had been attested to perform specific tasks. This is not practical today, and much research remains to be done. For example, there are no formal standards to even describe what a TEE is, what kind of programming interface a TEE cloud should have, or how different TEEs should interact.
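To illustrate what being "attested to perform specific tasks" means, here is a deliberately simplified, hypothetical key-release check: a service hands out a data key only to code whose measurement it trusts. All names are invented, and real attestation involves hardware-signed quotes verified against vendor roots of trust, which this sketch omits entirely.

```python
# Hypothetical sketch of attestation-gated key release (names invented;
# a real verifier would first validate a hardware-signed attestation quote).
import hashlib

TRUSTED_MEASUREMENTS = {
    hashlib.sha256(b"approved-analytics-service-v1.2").hexdigest(),
}

def release_key(attestation_report: dict, data_key: bytes):
    # Hand the decryption key only to code we have measured and approved.
    if attestation_report.get("code_measurement") in TRUSTED_MEASUREMENTS:
        return data_key
    return None

report = {"code_measurement": hashlib.sha256(b"approved-analytics-service-v1.2").hexdigest()}
print(release_key(report, b"data-encryption-key") is not None)                # True: trusted code
print(release_key({"code_measurement": "deadbeef"}, b"data-encryption-key"))  # None: untrusted
```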

Additionally, it is important to continuously improve the security guarantees of TEEs. For instance, understanding which side-channel attacks are truly realistic and developing countermeasures remains a major research topic. Furthermore, we need to continue researching designs for confidential databases, confidential ledgers, and confidential storage. Finally, even if we build both confidential computing and storage environments, how can we establish trust in the code that we want to run? As a cloud provider, our customers expect us to work continuously on improving the security of our infrastructure and the services that run on it.

Secure-by-design cloud

In the future, we can imagine Azure customers compiling their software for special hardware with memory-tagging capabilities, eliminating problems like buffer overflows for good. To detect compromise, VM memory snapshots could be inspected and studied with AI-powered tools. In the worst case, system security could always be bootstrapped from a minimal hardware root of trust. At Microsoft Research, we are going a step further and asking how we can build the cloud from the ground up, with security in mind.

New cryptography

The advance of quantum computing presents many exciting potential opportunities. As a leader in both quantum computing development and cryptographic research, Microsoft has a responsibility to ensure that the groundbreaking innovations on the horizon don't compromise classical (non-quantum) computing systems and information. Working across Microsoft, we are learning more about the weaknesses of classical cryptography and how to build new cryptographic systems strong enough to resist future attacks.

Our active participation in the National Institute of Standards and Technology (NIST) Post-Quantum Cryptography project has allowed Microsoft Research to examine deeply how the change to quantum-resistant algorithms will impact Microsoft services and Microsoft customers. With over seven years of work in this area, Microsoft Research's leadership in post-quantum cryptography will help customers prepare for the upcoming change of cryptographic algorithms.

We've joined with the University of Waterloo and others to build a platform for experimenting with newly proposed cryptographic systems and applying them to real-world protocols and scenarios. We've implemented real-world tests of post-quantum cryptography to learn how these new systems will work at scale and how we can deploy them quickly to protect network tunnels. Our specialized hardware implementations and cryptanalysis provide feedback to the designers of the new cryptosystems, improving their performance and making post-quantum cryptosystems smaller and stronger.
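Developers can already experiment with these algorithms themselves. As a hedged example, the sketch below runs a post-quantum key encapsulation using the Open Quantum Safe project's liboqs Python bindings; it assumes the `oqs` package is installed, and available algorithm names vary by liboqs version (newer releases expose the NIST-standardized ML-KEM names).

```python
# Minimal post-quantum key exchange using the liboqs-python bindings
# (the algorithm name may be "ML-KEM-512" in newer liboqs releases).
import oqs

alg = "Kyber512"
with oqs.KeyEncapsulation(alg) as client, oqs.KeyEncapsulation(alg) as server:
    public_key = client.generate_keypair()
    # The server encapsulates a fresh shared secret under the client's public key.
    ciphertext, server_secret = server.encap_secret(public_key)
    # The client decapsulates with its private key; both sides now share a
    # secret that can key a network tunnel such as TLS or a VPN.
    client_secret = client.decap_secret(ciphertext)
    assert client_secret == server_secret
```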

ElectionGuard