{"id":576684,"date":"2019-04-16T08:45:11","date_gmt":"2019-04-16T15:45:11","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-event&p=576684"},"modified":"2024-03-15T09:01:15","modified_gmt":"2024-03-15T16:01:15","slug":"microsoft-research-redmond-cryptography-colloquium","status":"publish","type":"msr-event","link":"https:\/\/www.microsoft.com\/en-us\/research\/event\/microsoft-research-redmond-cryptography-colloquium\/","title":{"rendered":"Microsoft Research Redmond Cryptography and Privacy Colloquium"},"content":{"rendered":"\n\n\n\n\n

Microsoft Research Redmond Cryptography and Privacy Colloquium<\/h2>\n\n\n\n

The Cryptography and Privacy Research<\/a> group in Microsoft Research Redmond invites researchers from around the world to visit the group and speak in this colloquium series.<\/p>\n\n\n\n

In-person colloquium presentations take place at:<\/p>\n\n\n\n

Microsoft Building 99
14820 NE 36th Street
Redmond, WA 98052<\/p>\n\n\n\n

View map > (opens in new tab)<\/span><\/a><\/p>\n\n\n\n

<\/div>\n\n\n\n\n\n

Secure Inference of Large Language Models and Large-Scale CNNs through Function Secret Sharing<\/h2>\n\n\n\n

Nishanth Chandran, Microsoft Research India<\/p>\n\n\n\n

March 21, 2024 | 9:00 AM | Virtual<\/p>\n\n\n\n

Abstract<\/h3>\n\n\n\n

Secure inference enables a model publisher to offer inference-as-a-service to a model consumer in such a way that the weights (or prompt) of the model are kept hidden from the consumer and the consumer’s input is kept hidden from the publisher. Cryptographically, this is realized through specialized secure two-party computation (2PC) protocols. A recent paradigm for 2PC protocols (in the preprocessing model) has emerged, based on the technique of function secret sharing. These techniques shift the overheads in 2PC from communication to computation. In this talk, we will cover these recent techniques, which have enabled secure inference of ImageNet-scale CNNs as well as Large Language Models with only a small overhead over executing them in the clear.<\/p>\n\n\n\n
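The 2PC protocols above are built on secret sharing. The sketch below illustrates only the simplest building block, additive secret sharing of values (function secret sharing, which shares *functions*, is considerably more involved); the modulus and helper names are illustrative, not taken from any of the systems discussed.

```python
import secrets

P = 2**61 - 1  # a Mersenne prime modulus for the toy field

def share(x):
    """Split x into two additive shares; either share alone is uniformly random."""
    r = secrets.randbelow(P)
    return r, (x - r) % P

def reconstruct(s0, s1):
    return (s0 + s1) % P

# Each party holds one share of a secret input; neither learns x alone.
x = 42
s0, s1 = share(x)
assert reconstruct(s0, s1) == x

# Additive shares let the parties add secrets locally, share by share,
# without communicating -- the basic trick behind linear layers in 2PC.
y = 100
t0, t1 = share(y)
assert reconstruct((s0 + t0) % P, (s1 + t1) % P) == (x + y) % P
```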

Biography<\/h3>\n\n\n\n

Nishanth Chandran is a Principal Researcher at Microsoft Research, India. His research interests are cryptography and security. Prior to joining MSR India, Nishanth was a Researcher at AT&T Labs, and before that he was a Post-doctoral Researcher at MSR Redmond. Nishanth is a recipient of the 2010 Chorafas Award for exceptional achievements in research, and his work has received coverage in science journals and the media, including Nature and MIT Technology Review. He has published several papers in top computer science conferences and journals such as Crypto, Eurocrypt, IEEE S&P, CCS, STOC, and FOCS. His work on position-based cryptography was selected as one of the top three works and invited to QIP 2011 as a plenary talk. Nishanth has served on the technical program committees of all the top cryptography conferences on several occasions, and he holds many US patents. Nishanth received his Ph.D. in Computer Science from UCLA, M.S. in Computer Science from UCLA, and B.E. in Computer Science and Engineering from Anna University (Hindustan College of Engineering), Chennai. Nishanth is also a top-ranking All India Radio South Indian Classical violinist and has performed at international venues such as the Hollywood Bowl, Los Angeles, and the Madras Music Academy, Chennai.<\/p>\n\n\n\n

<\/div>\n\n\n\n

TrustRate: A Decentralized Platform for Hijack-Resistant Anonymous Reviews<\/h2>\n\n\n\n

Rohit Dwivedula, The University of Texas at Austin<\/p>\n\n\n\n

April 5, 2024 | 10:30 AM | Virtual<\/p>\n\n\n\n

Abstract<\/h3>\n\n\n\n

Reviews and ratings by users form a central component in several widely used products today (e.g., product reviews, ranking content, etc.), but today\u2019s platforms for managing such reviews are centralized, ad-hoc and vulnerable to various forms of tampering. TrustRate is an end-to-end decentralized, hijack-resistant platform for authentic, anonymous, tamper-proof reviews. With a prototype implementation and evaluation at the scale of thousands of nodes, we demonstrate the efficacy and performance of our platform, towards a new paradigm for building products based on trusted reviews by end users without having to trust a single organization that manages the reviews.<\/p>\n\n\n\n

Biography<\/h3>\n\n\n\n

Rohit Dwivedula is a first-year PhD student at The University of Texas at Austin, advised by Aditya Akella and Daehyeok Kim. Before that, he was a research fellow at Microsoft Research, India, where he worked on research problems in two areas: (1) systems + privacy and security, and (2) AI infrastructure.<\/p>\n\n\n\n

<\/div>\n\n\n\n


Can we cast a ballot as intended and be receipt free?<\/h2>\n\n\n\n

Olivier Pereira, Microsoft Research and UCLouvain<\/p>\n\n\n\n

February 22, 2024 | 10:30 AM | Microsoft Building 99<\/p>\n\n\n\n

Abstract<\/h3>\n\n\n\n

We explore the interaction between two important properties of ballot submission in elections: cast-as-intended verifiability and receipt-freeness. The first property, cast-as-intended verifiability, expresses that a ballot submission process should offer a way for the voters to verify that the ballots they submitted accurately reflect their vote intent, even if they used a corrupted device to prepare their ballot. The second property, receipt-freeness, expresses that the ballot submission process should not offer any evidence that a voter could use to demonstrate the content of her vote to a third party. These two properties have been abundantly studied in the past, but most of the time in separation.<\/p>\n\n\n\n

Our first result is negative: we demonstrate that, in the absence of a trusted authority, it is impossible to obtain a receipt-free voting protocol with cast-as-intended verifiability if the vote submission process is non-interactive.<\/p>\n\n\n\n

On the positive side, we demonstrate that, if a trusted voter registration authority is available, or if the ballot submission process can be made interactive, then cast-as-intended verifiability and receipt-freeness can be obtained together. This result is constructive, and we propose examples of ballot submission processes that achieve the desired properties in both settings. These submission processes are quite demanding, and it is an intriguing question to see if more convenient submission processes can be found.<\/p>\n\n\n\n

This talk is based on joint work with Henri Devillez, Thomas Peters and Quentin Yang.<\/p>\n\n\n\n

Biography<\/h3>\n\n\n\n

Olivier Pereira is a visiting scientist at Microsoft Research during the 2023-2024 academic year and a professor of cryptography at UCLouvain, Belgium. His research explores cryptographic protocols, including their design and analysis. He contributed to the design of several verifiable voting systems, including Helios and STAR-Vote, and to the analysis of several others, including the Swiss and Estonian Internet voting systems. He is now contributing to the design of the ElectionGuard SDK. He consulted on election technologies for various government and international institutions, in Europe and elsewhere. He is also interested in leakage resilient cryptography and anonymous communication networks.<\/p>\n\n\n\n


Might I Get Pwned: A Second Generation Compromised Credential Checking Service<\/h2>\n\n\n\n

Bijeeta Pal, Snapchat<\/p>\n\n\n\n

Abstract<\/h3>\n\n\n\n

Credential stuffing attacks use stolen passwords to log into victim accounts. To defend against these attacks, recently deployed compromised credential checking (C3) services provide APIs that help users and companies check whether a (username, password) pair is exposed. These services, however, only check whether the exact password is leaked, and therefore do not mitigate credential tweaking attacks \u2014 attempts to compromise a user account with variants of a user\u2019s leaked passwords. Recent work has shown that credential tweaking attacks can compromise accounts quite effectively even when credential stuffing countermeasures are in place. We initiate work on C3 services that protect users from credential tweaking attacks. The core underlying challenge is how to identify passwords that are similar to their leaked passwords while preserving honest clients\u2019 privacy and also preventing malicious clients from extracting breach data from the service. We formalize the problem and explore ways to measure password similarity that balance efficacy, performance, and security. Based on this study, we design \u201cMight I Get Pwned\u201d (MIGP), a new kind of breach alerting service. Our simulations show that MIGP reduces the efficacy of state-of-the-art 1000-guess credential tweaking attacks by 94%. MIGP preserves user privacy and limits potential exposure of sensitive breach entries. We show that the protocol is fast, with response time close to existing C3 services. We worked with Cloudflare to deploy MIGP in practice.<\/p>\n\n\n\n
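The core idea of a similarity-aware check can be pictured with a toy sketch: flag a candidate password if it matches a breached password or a common tweak of one. The variant rules and function names below are hypothetical illustrations; MIGP\u2019s actual design additionally hides the query behind bucketization and an oblivious PRF, which this sketch omits entirely.

```python
def tweaks(pw: str) -> set:
    """A tiny, illustrative set of 'credential tweaking' variants of a
    leaked password (not MIGP's actual variant model)."""
    out = {pw, pw.capitalize(), pw + "1", pw + "!", pw.rstrip("0123456789")}
    return {v for v in out if v}

def similarity_aware_check(candidate: str, breached: set) -> bool:
    """Flag the candidate if it equals a breached password or a tweak of one."""
    return any(candidate in tweaks(leaked) for leaked in breached)

breach = {"hunter2", "p@ssword"}
assert similarity_aware_check("hunter2", breach)        # exact match
assert similarity_aware_check("Hunter2", breach)        # capitalization tweak
assert similarity_aware_check("hunter21", breach)       # appended digit
assert not similarity_aware_check("correcthorse", breach)
```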

Biography<\/h3>\n\n\n\n

Bijeeta Pal is a Privacy Engineer at Snapchat. Pal recently graduated with a Ph.D. in computer science from Cornell University. Her research interest lies in the application of cryptographic techniques to build secure, private, and scalable systems that empower users. Her recent project focused on designing a similarity-aware and privacy-preserving credential checking service, Might I Get Pwned, that warns users against selecting passwords similar to a breached password (deployed at Cloudflare). She is also a recipient of the 2021-22 J.P. Morgan Ph.D. Fellowship.<\/p>\n\n\n\n


Authentication for Augmented and Virtual Reality<\/h2>\n\n\n\n

Sophie Stephenson, University of Wisconsin-Madison<\/p>\n\n\n\n

Wednesday, October 5, 2022 | 9:05 AM PT | remote talk<\/p>\n\n\n\n

Abstract<\/h3>\n\n\n\n

Augmented reality (AR) and virtual reality (VR) devices are emerging as prominent contenders to today\u2019s personal computers. As personal devices, users will use AR and VR to store and access their sensitive data and thus will need secure and usable ways to authenticate. Unfortunately, it is not yet clear which authentication methods are best suited for these novel devices. In this talk, I\u2019ll share how we evaluated the state of the art in authentication methods for AR and VR. First, through a survey of AR\/VR users, we defined 20 user- and developer-desired properties for any AR\/VR authentication method. We then used these properties to perform a comprehensive evaluation of AR\/VR authentication methods both proposed in the literature and currently used in practice. Our work synthesizes the current state of authentication for AR\/VR devices and provides advice for designing and evaluating future authentication methods. (Link to paper (opens in new tab)<\/span><\/a>)<\/p>\n\n\n\n

Biography<\/h3>\n\n\n\n

Sophie Stephenson (opens in new tab)<\/span><\/a> (she\/her) is a PhD student at the University of Wisconsin\u2014Madison, advised by Rahul Chatterjee. Her work connects security & privacy and human-centered computing by investigating computer security for at-risk user populations. In particular, her recent work aims to understand and mitigate the use of IoT devices in intimate partner violence. Previously, she earned a Bachelor\u2019s degree in Mathematics at Vassar College.<\/p>\n\n\n\n


The Exact Security of BIP32 Wallets<\/h2>\n\n\n\n

Poulami Das, TU Darmstadt<\/p>\n\n\n\n

Wednesday, August 24, 2022 | 10:30 AM PT | hybrid talk<\/p>\n\n\n\n

Abstract<\/h3>\n\n\n\n

In many cryptocurrencies, the problem of key management has become one of the most fundamental security challenges. Typically, keys are kept in designated schemes called \u2018Wallets\u2019, whose main purpose is to store these keys securely. One such system is the BIP32 wallet (Bitcoin Improvement Proposal 32), which since its introduction in 2012 has been adopted by countless Bitcoin users and is one of the most frequently used wallet systems today. Surprisingly, very little is known about the concrete security properties offered by this system. In this work, we propose the first formal analysis of the BIP32 system in its entirety and without any modification. Building on the recent work of Das et al. (CCS `19), we put forth a formal model for hierarchical deterministic wallet systems (such as BIP32) and give a security reduction in this model from the existential unforgeability of the ECDSA signature algorithm used in BIP32. We conclude by giving concrete security parameter estimates achieved by the BIP32 standard, and show that by moving to an alternative key derivation method we can achieve a tighter reduction, offering an additional 20 bits of security (111 vs. 91 bits) at no additional cost.<\/p>\n\n\n\n
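The hierarchical deterministic structure at the heart of BIP32 can be sketched as follows: each child key is derived from the parent secret via HMAC-SHA512, whose output is split into child key material and a new chain code. This is a structure-only sketch of hardened-style derivation; the real standard additionally specifies serialization rules and non-hardened derivation from the parent *public* key, and the toy master key below is arbitrary.

```python
import hmac
import hashlib

# secp256k1 group order, used to reduce derived scalars (as in BIP32)
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def derive_child(parent_key: int, chain_code: bytes, index: int):
    """Hardened-style child derivation: HMAC-SHA512 over the parent secret
    and index, split into 32 bytes of key material and a new chain code."""
    data = b"\x00" + parent_key.to_bytes(32, "big") + index.to_bytes(4, "big")
    digest = hmac.new(chain_code, data, hashlib.sha512).digest()
    child_key = (int.from_bytes(digest[:32], "big") + parent_key) % N
    return child_key, digest[32:]  # (child secret, child chain code)

master_key, master_cc = 0x1234, b"\x01" * 32          # illustrative values
k1, cc1 = derive_child(master_key, master_cc, 0)       # child m/0
k2, _ = derive_child(k1, cc1, 0)                       # grandchild m/0/0
assert derive_child(master_key, master_cc, 0)[0] == k1  # fully deterministic
assert k1 != k2 and 0 < k1 < N
```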

Biography<\/h3>\n\n\n\n

Poulami Das is a final year PhD student at TU Darmstadt, Germany, working under the supervision of Sebastian Faust. Her current research interests are advanced signature schemes, consensus protocols and password-authenticated protocols.<\/p>\n\n\n\n


Aligning Technical Data Privacy Protections with User Concerns: A Differential Privacy Case Study<\/h2>\n\n\n\n

Elissa Redmiles, Max Planck Institute<\/p>\n\n\n\n

Wednesday, July 20, 2022 | 9:05 AM PT | remote talk<\/p>\n\n\n\n

Abstract<\/h3>\n\n\n\n

People\u2019s privacy concerns significantly influence their adoption of, engagement with, and ability to benefit from technology. A number of technical data privacy protections (e.g., differential privacy, multi-party computation) have been developed and deployed to address these privacy concerns. However, there has been little effort to meaningfully explain the privacy guarantees offered by these approaches to the people they aim to protect, nor have we evaluated whether such approaches meet the privacy needs of the people whose data is being processed. Perhaps as a result, despite growing deployment of technical data privacy protections, people feel less control over their data than ever.<\/p>\n\n\n\n

In the work presented in this talk, we take a first step toward evaluating the alignment between technical privacy protections and people\u2019s privacy concerns. I will present the results of a rigorous survey-based investigation of: (1) whether users care about the protections afforded by differential privacy, and (2) whether they are therefore more willing to share their data with differentially private systems. I will conclude with a forward-look at future work on evaluating the alignment between end-user privacy concerns and other technical privacy protections.<\/p>\n\n\n\n

Biography<\/h3>\n\n\n\n

Dr. Elissa M. Redmiles is a faculty member and research group leader at the Max Planck Institute for Software Systems and a Visiting Scholar at the Berkman Klein Center for Internet & Society at Harvard University. She uses computational, economic, and social science methods to understand users\u2019 security, privacy, and online safety-related decision-making processes. Her work has been recognized with multiple paper awards at USENIX Security, ACM CCS and ACM CHI and has been featured in popular press publications such as the New York Times, Wall Street Journal, Scientific American, Rolling Stone, Wired, Business Insider, and CNET.<\/p>\n\n\n\n


SNARKBlock: Federated Anonymous Blocklisting from Hidden Common Input Aggregate Proofs<\/h2>\n\n\n\n

Michael Rosenberg, University of Maryland<\/p>\n\n\n\n

Friday, July 8, 2022 | 9:05 AM PT | remote talk<\/p>\n\n\n\n

Abstract<\/h3>\n\n\n\n

Moderation is an essential tool to fight harassment and prevent spam. The use of strong user identities makes moderation easier, but trends toward strong identity pose serious privacy issues, especially when identities are linked across social media platforms. Zero-knowledge blocklists allow cross-platform blocking of users but, counter-intuitively, do not link users\u2019 identities inter- or intra-platform, or to the fact that they were blocked. Unfortunately, existing approaches (Tsang et al. \u201910) require that servers do work linear in the size of the blocklist for each verification of a non-membership proof. We design and implement SNARKBlock, a new protocol for zero-knowledge blocklisting with server-side verification that is logarithmic in the size of the blocklist. SNARKBlock is also the first approach to support ad-hoc, federated blocklisting: websites can mix and match their own blocklists with other blocklists and dynamically choose which identity providers they trust. Our paper can be found at eprint.iacr.org\/2021\/1577 (opens in new tab)<\/span><\/a>.<\/p>\n\n\n\n

Biography<\/h3>\n\n\n\n

Michael Rosenberg is a 4th-year cryptography PhD student at the University of Maryland, living in NYC. His research interests include zero-knowledge proofs, fully homomorphic encryption, secure messaging, privacy-enhancing technologies, and accessibility. Other info about Michael, in order of importance: he is an avid birdwatcher, he can make a Thai curry in 10 min flat, he has never done karaoke, and he blows about $35\/week on improv shows.<\/p>\n\n\n\n


Prio: Private, Robust, and Scalable Computation of Aggregate Statistics<\/h2>\n\n\n\n

Henry Corrigan-Gibbs, MIT<\/p>\n\n\n\n

Thursday, June 23, 2022 | 11 AM PT | hybrid talk<\/p>\n\n\n\n

Abstract<\/h3>\n\n\n\n

This talk will present Prio, a privacy-preserving system for the collection of aggregate statistics. Each Prio client holds a private data value (e.g., its current location), and a small set of servers compute statistical functions over the values of all clients (e.g., the most popular location). As long as at least one server is honest, the Prio servers learn nearly nothing about the clients\u2019 private data, except what they can infer from the aggregate statistics that the system computes. To protect functionality in the face of faulty or malicious clients, Prio uses zero-knowledge proofs on secret-shared data, a new cryptographic technique that yields a hundred-fold performance improvement over conventional zero-knowledge approaches. Prio extends classic private aggregation techniques to enable the collection of a large class of useful statistics. For example, Prio can perform a least-squares regression on high-dimensional client-provided data without ever seeing the data in the clear.<\/p>\n\n\n\n
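The aggregation core described above can be sketched in a few lines: each client splits its private value into one additive share per server, each server sums only the shares it receives, and only the combined server totals reveal the aggregate. This sketch omits Prio\u2019s defining ingredient, the zero-knowledge proofs on secret-shared data that protect against malformed client submissions; the modulus and values are illustrative.

```python
import secrets

M = 2**31 - 1  # toy modulus bounding client values

def split(value: int):
    """Client-side: split a private value into one share per server."""
    s0 = secrets.randbelow(M)
    return s0, (value - s0) % M

clients = [3, 7, 1, 9]  # each client's private value (e.g., a count)
server0_total = server1_total = 0
for v in clients:
    s0, s1 = split(v)
    server0_total = (server0_total + s0) % M  # server 0 sees only s0
    server1_total = (server1_total + s1) % M  # server 1 sees only s1

# Neither server alone learns any client's value; together they learn
# only the aggregate statistic.
aggregate = (server0_total + server1_total) % M
assert aggregate == sum(clients)
```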

Biography<\/h3>\n\n\n\n

Henry Corrigan-Gibbs (he\/him) is an assistant professor at MIT. Henry builds computer systems that provide strong security and privacy properties using ideas from cryptography, computer security, and computer systems. Henry completed his PhD in the Applied Cryptography Group at Stanford, where he was advised by Dan Boneh. After that, he was a postdoc in Bryan Ford\u2019s research group at EPFL in Lausanne, Switzerland.<\/p>\n\n\n\n

Henry has received an honorable mention for the 2020 ACM Doctoral Dissertation Award, three IACR Best Young Researcher Paper Awards (at Eurocrypt in 2020, the Theory of Cryptography Conference in 2019, and Eurocrypt in 2018), the 2016 Caspar Bowden Award for Outstanding Research in Privacy Enhancing Technologies, and the 2015 IEEE Security and Privacy Distinguished Paper Award. Henry\u2019s work has been cited by the IETF and NIST, and his Prio system for privacy-preserving telemetry data collection is used today in the Firefox web browser, Apple\u2019s iOS, and Google\u2019s Android operating system.<\/p>\n\n\n\n


Anonymous tokens: single-use anonymous credentials for the web<\/h2>\n\n\n\n

Michele Orr\u00f9, UC Berkeley<\/p>\n\n\n\n

Friday, June 17, 2022 | 10:05 AM PT | remote talk<\/p>\n\n\n\n

Abstract<\/h3>\n\n\n\n

This talk is about anonymous tokens, a cryptographic primitive that enables a server to issue a client with lightweight, single-use credentials. Anonymous tokens satisfy two security properties: unforgeability and unlinkability. The former guarantees that no token can be double-spent, while the latter that the client can anonymously redeem the received tokens. We will focus on tokens with public and private metadata and how these affect security guarantees. We will discuss how they are used to limit tracking on the web.<\/p>\n\n\n\n
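The single-use (unforgeability) side of the abstract can be pictured as a server-side double-spend check: a token identifier is accepted at most once. This is only the redemption bookkeeping; the issuance cryptography that makes tokens unlinkable (so the server cannot connect a redeemed token back to the client it was issued to) is the interesting part and is omitted here. All names are illustrative.

```python
import os

redeemed = set()  # server-side log of spent token identifiers

def redeem(token_id: bytes) -> bool:
    """Accept a token only once; a second redemption is rejected.
    (Sketch of the double-spend check only -- real anonymous tokens use
    blind issuance so redemption cannot be linked to issuance.)"""
    if token_id in redeemed:
        return False
    redeemed.add(token_id)
    return True

t = os.urandom(16)          # client-held single-use token identifier
assert redeem(t) is True    # first redemption succeeds
assert redeem(t) is False   # double-spend is detected
```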

Biography<\/h3>\n\n\n\n

Michele is a postdoctoral researcher at UC Berkeley. He helped Google develop Trust Tokens for removing third-party cookies from the browser. His research interests include cryptography and privacy-enhancing technologies: he focuses on cryptographic protocols and implementation, zero-knowledge proofs, confidential transactions, and multi-party computation. He holds a Ph.D. in Computer Science from \u00c9cole Normale Sup\u00e9rieure. In the past, he contributed to Python, Debian, and Tor. He helped build Globaleaks, an open-source whistleblowing platform.<\/p>\n\n\n\n


Interoperable Private Attribution<\/h2>\n\n\n\n

Erik Taubeneck and Daniel Masny, Meta<\/p>\n\n\n\n

Friday, June 10, 2022 | 11:05 AM PT | hybrid talk<\/p>\n\n\n\n

Abstract<\/h3>\n\n\n\n

Erik Taubeneck and Daniel Masny (both Research Scientists at Meta) will present Interoperable Private Attribution (IPA), a proposal in the W3C Private Ad Technology Community Group (PATCG). IPA is a proposal for attribution measurement, enabling advertisers and websites to understand the performance of digital advertising in an aggregated and anonymous manner. We will present the motivating use cases, as well as the ideal functionality for the cryptographic protocol that purpose-limits the system to aggregated and anonymous attribution measurement.<\/p>\n\n\n\n

Biography<\/h3>\n\n\n\n

Erik and Daniel are both Research Scientists at Meta, working on industry solutions, applying privacy enhancing technologies to digital advertising.<\/p>\n\n\n\n


Leakage and Protection of Dataset Properties<\/h2>\n\n\n\n

Olya Ohrimenko, The University of Melbourne<\/p>\n\n\n\n

Friday, May 20, 2022 | 9:05 AM PT | hybrid talk<\/p>\n\n\n\n

Abstract<\/h3>\n\n\n\n

Data privacy in computer science has been mostly concerned with protecting an individual\u2019s data when releasing the result of a computation on a larger dataset (e.g., differential privacy). In this talk, I will depart from individual privacy and consider the privacy of dataset properties (e.g., the race or gender distribution in a dataset). First, I will show that global properties of dataset attributes can be leaked when one releases machine learning models computed on this data. Then, I will discuss definitions of privacy for dataset properties and describe mechanisms that can meet these definitions.<\/p>\n\n\n\n

This talk is based on joint work with Michelle Chen (The University of Melbourne), Rachel Cummings (Columbia University), Shruti Tople (Microsoft Research) and Wanrong Zhang (Harvard University).<\/p>\n\n\n\n

Biography<\/h3>\n\n\n\n

Olya Ohrimenko is an Associate Professor at The University of Melbourne, which she joined in 2020. Prior to that, she was a Principal Researcher at Microsoft Research in Cambridge, UK, where she started as a Postdoctoral Researcher in 2014. Her research interests include privacy and integrity of machine learning algorithms, data analysis tools and cloud computing, including topics such as differential privacy, verifiable and data-oblivious computation, trusted execution environments, and side-channel attacks and mitigations. Recently, Olya has worked with the Australian Bureau of Statistics and National Australia Bank. She has received solo and joint research grants from Facebook and Oracle and is currently a PI on an AUSMURI grant. Olya holds a Ph.D. degree from Brown University and a B.CS. (Hons) degree from the University of Melbourne. See https:\/\/people.eng.unimelb.edu.au\/oohrimenko\/ (opens in new tab)<\/span><\/a> for more information.<\/p>\n\n\n\n


Zero-Knowledge Middleboxes<\/h2>\n\n\n\n

Paul Grubbs, University of Michigan<\/p>\n\n\n\n

Friday, May 6, 2022 | 9:05 AM PT | remote talk<\/p>\n\n\n\n

Abstract<\/h3>\n\n\n\n

This talk will discuss a novel application of cryptography, the zero-knowledge middlebox. There is an inherent tension between ubiquitous encryption of network traffic and the ability of middleboxes to enforce network usage restrictions. An emerging battleground that epitomizes this tension is DNS filtering. Encrypted DNS (DNS-over-HTTPS and DNS-over-TLS) was recently rolled out by default in Firefox, with Google, Cloudflare, Quad9 and others running encrypted DNS resolvers. This is a major privacy win, protecting users from local network administrators observing which domains they are communicating with. However, administrators have traditionally filtered DNS to enforce network usage policies (e.g. blocking access to adult websites). Such filtering is legally required in many networks, such as US schools up to grade 12. As a result, Mozilla was forced to compromise, building a special flag for local administrators to instruct Firefox not to use Encrypted DNS.<\/p>\n\n\n\n

This example points to an open question of general importance, namely: can we resolve such tensions, enabling network policy enforcement while giving users the maximum possible privacy? Prior work has attempted to balance these goals by either revealing client traffic to trusted hardware run by the middlebox (e.g. Endbox) or using special searchable encryption protocols which enable some policy enforcement on encrypted traffic (e.g. Blindbox) by leaking information to the middlebox. Instead, we propose utilizing zero-knowledge proofs for clients to prove to middleboxes that their encrypted traffic is policy-compliant, without revealing any other additional information. Critically, such zero-knowledge middleboxes don\u2019t require trusted hardware or any modifications to existing TLS servers. We implemented a prototype of our protocol using Groth16 proofs which can prove statements about an encrypted TLS 1.3 connection such as \u201cthe domain being queried in this encrypted DNS packet is not a member of the specified blocklist.\u201d With current tools, our prototype adds around fifteen seconds of latency to opening a new TLS 1.3 connection, and at least three seconds to produce one proof of policy-compliance. While this is too slow for use with interactive web-browsing, it is close enough that we consider it a tantalizing target for future optimization.<\/p>\n\n\n\n

This talk will cover the tension between encryption and policy-enforcing middleboxes, including recent developments in Encrypted DNS and the necessity of DNS filtering. It will briefly survey existing solutions before presenting and arguing for the new zero-knowledge middlebox paradigm. Finally, the talk will describe our prototype implementation and several optimizations developed for it, as well as future avenues for improvement and open research questions. Our paper can be found at https:\/\/eprint.iacr.org\/2021\/1022 (opens in new tab)<\/span><\/a>.<\/p>\n\n\n\n

Biography<\/h3>\n\n\n\n

Paul Grubbs is an Assistant Professor at the University of Michigan. In his research in applied cryptography and security, he uses a wide array of theoretical and practical tools to help make computer systems more secure and private. He received his PhD from Cornell University, and did a postdoc at NYU. He has received an NSF Graduate Research Fellowship, a Cornell CS Dissertation Award, and a Distinguished Paper Award from Usenix Security.<\/p>\n\n\n\n


Aggregatability and asynchrony in distributed key generation<\/h2>\n\n\n\n

Sarah Meiklejohn, Google \/ University College London<\/p>\n\n\n\n

Tuesday, March 29, 2022 | 9:05 AM PT | remote talk<\/p>\n\n\n\n

Abstract<\/h3>\n\n\n\n

This talk will cover some recent research in distributed key generation (DKG) protocols, focusing on its definitions of security and on aggregatable DKG, in which the parties can produce an aggregated and publicly verifiable transcript. It will also explore the applications of DKG and, if time permits, how DKG can be achieved in asynchronous environments.<\/p>\n\n\n\n

Biography<\/h3>\n\n\n\n

Sarah Meiklejohn is a Professor in Cryptography and Security at University College London and a Research Scientist at Google, where she works on the Certificate Transparency team.<\/p>\n\n\n\n


Automated Attack Synthesis by Extracting Finite State Machines from Protocol Specification Documents<\/h2>\n\n\n\n

Maria Pacheco, Purdue University, and Max von Hippel, Northeastern University<\/p>\n\n\n\n

Friday, March 18, 2022 | 9:05 AM PDT | remote talk<\/p>\n\n\n\n

Abstract<\/h3>\n\n\n\n

Automated attack discovery techniques, such as attacker synthesis or model-based fuzzing, provide powerful ways to ensure network protocols operate correctly and securely. Such techniques, in general, require a formal representation of the protocol, often in the form of a finite state machine (FSM). Unfortunately, many protocols are only described in English prose, and implementing even a simple network protocol as an FSM is time-consuming and prone to subtle logical errors. Automatically extracting protocol FSMs from documentation can significantly contribute to increased use of these techniques and result in more robust and secure protocol implementations.<\/p>\n\n\n\n

In this work, we focus on attacker synthesis as a representative technique for protocol security, and on RFCs as a representative format for protocol prose descriptions. Unlike other works that rely on rule-based approaches or use off-the-shelf NLP tools directly, we suggest a data-driven approach for extracting FSMs from RFC documents. Specifically, we use a hybrid approach consisting of three key steps: (1) large-scale word-representation learning for technical language, (2) focused zero-shot learning for mapping protocol text to a protocol-independent information language, and (3) rule-based mapping from protocol-independent information to a specific protocol FSM. We show the generalizability of our FSM extraction by using the RFCs for six different protocols: BGPv4, DCCP, LTP, PPTP, SCTP, and TCP. We demonstrate how automated extraction of an FSM from an RFC can be applied to the synthesis of attacks, with TCP and DCCP as case studies. Our approach shows that it is possible to automate attacker synthesis against protocols by using textual specifications such as RFCs.<\/p>\n\n\n\n
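The kind of artifact the pipeline above produces can be pictured as a transition table mapping (state, event) pairs to next states. The TCP-like fragment below is a hand-written illustration of that target representation, not output extracted by the system described in the talk.

```python
# A toy fragment of a protocol FSM: (state, event) -> next state.
FSM = {
    ("CLOSED", "passive_open"): "LISTEN",
    ("LISTEN", "recv_syn"): "SYN_RCVD",
    ("SYN_RCVD", "recv_ack"): "ESTABLISHED",
    ("ESTABLISHED", "close"): "CLOSED",
}

def run(fsm: dict, start: str, events: list) -> str:
    """Replay a sequence of events through the FSM, ignoring undefined
    transitions (a common lenient convention for partial extracted FSMs)."""
    state = start
    for e in events:
        state = fsm.get((state, e), state)
    return state

assert run(FSM, "CLOSED", ["passive_open", "recv_syn", "recv_ack"]) == "ESTABLISHED"
```

A machine-readable table like this is exactly what downstream attacker-synthesis tools consume in place of English prose.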

Biography<\/h3>\n\n\n\n

Maria Pacheco is a PhD Candidate in the Department of Computer Science at Purdue University. Her research focuses broadly on neural-symbolic methods to model natural language discourse scenarios. Before joining Purdue, she spent a couple of years working as a data scientist for various startups in her hometown of Caracas, Venezuela. She has published in top Natural Language Processing conferences and journals and has delivered tutorials on neural-symbolic modeling for NLP to diverse audiences, including an IJCAI \u201821 tutorial and an upcoming COLING \u201822 tutorial. Maria is a recipient of the 2021 Microsoft Research Dissertation Grant, and one of the main organizers of the LatinX in AI events in NLP.<\/p>\n\n\n\n

Max von Hippel is a 3rd year PhD student in the Khoury College of Computer Science at Northeastern University, advised by Dr. Cristina Nita-Rotaru.  His research focuses on the application of light-weight formal methods to protocol security, with a particular focus on the automatic discovery of vulnerabilities and attacks.  Max is an NSF GRFP Fellow, a Shelby Davis Scholar, and co-organizer of the Boston Computation Club.  He previously completed a Bachelor of Science in Pure Mathematics at the University of Arizona, with a Minor in Computer Science, and he grew up in Anchorage, Alaska.<\/p>\n\n\n\n


Differentially Private Resource Allocator<\/h2>\n\n\n\n

Joann Qiongna Chen, UC Irvine<\/p>\n\n\n\n

Friday, Mar 11, 2022 | 9:05 AM PST | remote talk<\/p>\n\n\n\n

Abstract<\/h3>\n\n\n\n

Despite isolation techniques to enforce clients\u2019 privacy in resource-sharing systems, one kind of side channel arising from resource allocation (RA) is frequently ignored.  That is, an attacker can request all available resources and use the fulfillment ratio to determine the existence of other clients, which is considered private information in many systems, such as metadata-private messengers. To defend against such attacks, allocating nothing achieves perfect privacy but wastes all resources; on the other hand, any greedy allocation method that assigns the maximum possible resources is non-private in the worst case.<\/p>\n\n\n\n

We use the formal guarantee of differential privacy (DP) to balance privacy and utility in RA.  To satisfy DP in RA, the server (resource allocator) follows the noisy allocation paradigm by adding some noise to the received requests (e.g., adding dummy requests or deleting some requests), and subsequently randomly assigning resources to the noisy requests.  The intuition is that noisy requests incur some waste, but also confound attackers.<\/p>\n\n\n\n

Prior work draws a number of dummy requests from a biased Laplace distribution. As the Laplace distribution satisfies DP, by a post-processing argument this approach also satisfies DP.  In this case, a large positive bias is needed to ensure the noise is always positive, but that bias also leads to poor utility. To provide a better privacy-utility tradeoff in RA, we first propose a new notion called Allocation DP (ADP), which follows the traditional DP notion but better models the process of RA. We then present a thorough study of the noisy allocation paradigm by considering different types, scales, and biases of noise.<\/p>\n\n\n\n
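The biased-Laplace baseline described above can be sketched as follows. The bias, scale, capacity, and request counts are illustrative assumptions, not parameters from the talk; the point is only that a large positive bias keeps the dummy count nonnegative at the cost of wasted capacity.

```python
import math
import random

def biased_laplace(bias, scale, rng):
    """Sample from a Laplace distribution shifted by a positive bias
    (inverse-CDF method)."""
    u = rng.random() - 0.5
    return bias - scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def noisy_allocate(requests, capacity, bias, scale, rng):
    """Pad the real requests with dummy requests drawn from a biased
    Laplace, then assign the available resources uniformly at random."""
    n_dummies = max(0, round(biased_laplace(bias, scale, rng)))
    # Dummy requests confound an attacker who probes the fulfillment
    # ratio, but they also consume capacity (the utility cost).
    pool = ["real"] * len(requests) + ["dummy"] * n_dummies
    rng.shuffle(pool)
    granted = pool[:capacity]
    return granted.count("real"), n_dummies

rng = random.Random(0)
served, dummies = noisy_allocate(requests=list(range(5)), capacity=8,
                                 bias=10.0, scale=2.0, rng=rng)
print(served, dummies)
```

Because the dummies are clamped at zero, a small bias would clip the noise distribution and break the post-processing argument; that is exactly why prior work needs the bias to be large, and why utility suffers.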

Biography<\/h3>\n\n\n\n

Joann Chen is a third-year Ph.D. student in the EECS department at UC Irvine, advised by Zhou Li. Her research interests center on Differential Privacy (DP), privacy-enhancing technologies, and privacy in machine learning. She has worked on differentially private DNS resolution, DP for data stream release, differentially private resource allocation, and quantifying privacy risks in machine learning.<\/p>\n\n\n\n


Account Recovery and Delegation in WebAuthn<\/h2>\n\n\n\n

Nick Frymann, University of Surrey<\/p>\n\n\n\n

Friday, Feb 11, 2022 | 9:05 AM PST | remote talk<\/p>\n\n\n\n

Abstract<\/h3>\n\n\n\n

WebAuthn, forming part of FIDO2, is a W3C standard for strong authentication, which employs digital signatures to authenticate web users whilst preserving their privacy. Owned by users, WebAuthn authenticators generate attested and unlinkable public-key credentials for each web service to authenticate users. Since the loss of authenticators prevents users from accessing web services, usable recovery solutions preserving the original WebAuthn design choices and security objectives are urgently needed. Additionally, these properties introduce challenges when account owners want to delegate certain rights to a proxy user, such as to access their accounts or perform actions on their behalf, as delegation must not undermine decentralisation, unlinkability, and attestation provided by WebAuthn.<\/p>\n\n\n\n

We first analyse the cryptographic core of Yubico\u2019s recent recovery proposal by modelling a new primitive, called Asynchronous Remote Key Generation (ARKG), which allows some primary authenticator to generate unlinkable public keys for which the backup authenticator may later recover corresponding private keys. Both processes occur asynchronously without the need for authenticators to export or share secrets, adhering to WebAuthn\u2019s attestation requirements. We conclude our analysis by discussing concrete instantiations behind Yubico\u2019s ARKG protocol, its integration with the WebAuthn standard, performance, and usability aspects.<\/p>\n\n\n\n
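The asynchronous key-derivation idea behind ARKG can be illustrated with discrete-log arithmetic: the primary combines the backup\u2019s public key with a Diffie-Hellman-derived tweak to mint a fresh, unlinkable public key, and the backup later reconstructs the matching private key on its own. The toy group below (a small prime-order subgroup of Z_p^*, not WebAuthn\u2019s elliptic curves) and the hash-to-tweak step are illustrative simplifications of the published construction, not its concrete instantiation.

```python
import hashlib
import random

# Toy prime-order group: subgroup of order q = 233 in Z_467^*, generator g = 4.
p, q, g = 467, 233, 4

def tweak(shared):
    """Derive a scalar tweak from a DH shared secret."""
    return int.from_bytes(hashlib.sha256(str(shared).encode()).digest(), "big") % q

rng = random.Random(1)

# Backup authenticator: long-term key pair (b, B). Only B is ever exported.
b = rng.randrange(1, q)
B = pow(g, b, p)

# Primary authenticator derives a fresh public key with the backup offline:
e = rng.randrange(1, q)             # ephemeral key
E = pow(g, e, p)
t = tweak(pow(B, e, p))             # DH against the backup's public key
P_derived = (B * pow(g, t, p)) % p  # derived, unlinkable public key

# Later, the backup recovers the matching private key from (E, P_derived):
t_rec = tweak(pow(E, b, p))         # same DH secret: B^e = E^b = g^(b*e)
sk_derived = (b + t_rec) % q

assert pow(g, sk_derived, p) == P_derived
```

No secret ever leaves either authenticator: the primary sees only public values, and the backup reconstructs the private key from the public ephemeral E, which is what makes the recovery asynchronous.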

We then discuss how this primitive may be extended for use in delegation. This gives two approaches, called remote and direct, to achieve credential-wise delegation in WebAuthn, whilst maintaining the standard\u2019s properties. In remote delegation, the account owner generates delegation credentials using ARKG and stores them at the relying party which provides interfaces to manage the delegation. The direct variant uses a delegation-by-warrant approach, through which the proxy receives delegation credentials from the account owner and presents them later to the relying party. To realise direct delegation, we introduce Proxy Signature with Unlinkable Warrants (PSUW), a new type of proxy signature that extends WebAuthn\u2019s unlinkability property to proxy users and can be constructed generically from ARKG.<\/p>\n\n\n\n

We also describe the implementation considerations of ARKG and both delegation approaches, along with their possible WebAuthn integration, including the extensions required for the CTAP and WebAuthn API. Our discussion extends to additional functionality, such as revocation and permissions management, as well as usability.<\/p>\n\n\n\n

Biography<\/h3>\n\n\n\n

Nick Frymann is a post-graduate researcher in the Surrey Centre for Cyber Security at the University of Surrey. His research focuses on multi-factor authentication on the web, with a view to improving security, functionality, and usability. Together with researchers in the SCCS group and partners from Wire and Yubico, he published Asynchronous Remote Key Generation at ACM CCS 2020, which offers more flexible and usable authentication backups in WebAuthn without undermining its security properties.<\/p>\n\n\n\n


Privacy Pass: Architecture and Protocol Deep Dive<\/h2>\n\n\n\n

Christopher Wood, Cloudflare<\/p>\n\n\n\n

Friday, Jan 28, 2022 | 9:05 AM PST | remote talk<\/p>\n\n\n\n

Biography<\/h3>\n\n\n\n

Christopher Wood is a Research Lead at Cloudflare Research. Outside of Cloudflare, he is co-chair of the TLS and MASQUE working groups at the IETF, as well as the PEARG research group in the IRTF. Before joining Cloudflare, Christopher worked on transport security, privacy, and cryptography engineering at Apple, as well as future Internet architectures at Xerox PARC. His interests lie at the intersection of network protocol design, communications security, privacy, and applied cryptography. At Cloudflare, he leads projects focused on security and privacy enhancements to a variety of systems, protocols, and applications. Christopher holds a Ph.D. in computer science from UC Irvine.<\/p>\n\n\n\n


Mitigating Membership Inference Attacks Through Self-Distillation<\/h2>\n\n\n\n

Amir Houmansadr, University of Massachusetts Amherst<\/p>\n\n\n\n

Friday, Jan 14, 2022 | 9:05 AM PST | remote talk<\/p>\n\n\n\n

Abstract<\/h3>\n\n\n\n

Membership inference attacks (MIAs) are a key measure to evaluate privacy leakage in machine learning (ML) models. These attacks aim to distinguish training members from non-members by exploiting the differential behavior of the models on member and non-member inputs. In this talk, I will start by introducing MIAs and their variations. Then, I will present SELENA, a novel architecture that aims to train ML models that have membership privacy while largely preserving their utility. SELENA relies on an ensemble architecture and leverages self-distillation to train MIA-resistant models. Through extensive experiments on major benchmark datasets, we show that SELENA presents a superior trade-off between membership privacy and utility compared to the state of the art.<\/p>\n\n\n\n
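The \u201cdifferential behavior\u201d that MIAs exploit can be illustrated with the classic confidence-thresholding baseline (a generic attack, not SELENA itself): an overfit model tends to be more confident on its training members, so the attacker simply thresholds the model\u2019s top softmax score. The confidence values below are synthetic stand-ins chosen for illustration.

```python
# Synthetic top-class confidence scores: overfit models are typically
# more confident on training members than on unseen (non-member) inputs.
member_conf = [0.99, 0.97, 0.95, 0.93, 0.90]
nonmember_conf = [0.88, 0.80, 0.75, 0.70, 0.60]

def mia_accuracy(members, nonmembers, threshold):
    """Predict 'member' whenever the model's confidence exceeds the threshold."""
    true_pos = sum(c > threshold for c in members)
    true_neg = sum(c <= threshold for c in nonmembers)
    return (true_pos + true_neg) / (len(members) + len(nonmembers))

# Sweep thresholds; attack accuracy far above 0.5 (random guessing)
# signals membership leakage.
best = max(mia_accuracy(member_conf, nonmember_conf, t / 100)
           for t in range(50, 100))
print(best)
```

Defenses like SELENA aim to shrink exactly this member/non-member confidence gap, driving the best achievable attack accuracy back toward the 0.5 baseline while preserving test accuracy.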

Biography<\/h3>\n\n\n\n

Amir Houmansadr is an associate professor of computer science at UMass Amherst. He received his Ph.D. from the University of Illinois at Urbana-Champaign in 2012, and spent two years at the University of Texas at Austin as a postdoctoral scholar. Amir is broadly interested in the security and privacy of networked systems. To that end, he designs and deploys privacy-enhancing technologies, analyzes network protocols and services (e.g., messaging apps and machine learning APIs) for privacy leakage, and performs theoretical analysis to derive bounds on privacy (e.g., using game theory and information theory). Amir has received several awards including an NSF CAREER Award in 2016, a Google Faculty Research Award in 2015, and the 2013 IEEE S&P Best Practical Paper Award.<\/p>\n\n\n\n


What is the Exact Security of the Signal Protocol?<\/h2>\n\n\n\n

Alexander Bienstock, New York University<\/p>\n\n\n\n

Friday, Dec 3, 2021 | 10:05 AM PST | remote talk<\/p>\n\n\n\n

Abstract<\/h3>\n\n\n\n

In this work we develop comprehensive definitions in the Universal Composability framework to study the Signal Double Ratchet (Signal for short) protocol. Our definitions enable a more fine-grained and rigorous analysis of the exact security of Signal by explicitly capturing many new security guarantees, in addition to the ones that were already identified in the state-of-the-art work by Alwen, Coretti and Dodis [Eurocrypt 2019]. Moreover, our definitions provide the ability to more easily build on top of Signal, using the UC Composition Theorem. The Signal protocol, as it is described in the whitepaper, securely realizes our ideal functionality F_Signal. However, as we interpret from the high-level description in the whitepaper, the guarantees of F_Signal seem slightly weaker than those one would expect Signal to satisfy. Therefore we provide a stronger, more natural definition, formalized through the ideal functionality F_Signal+. Based on our definitions, we are able to draw many important insights as follows:<\/p>\n\n\n\n