{"id":867756,"date":"2022-08-25T19:12:05","date_gmt":"2022-08-26T02:12:05","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-event&p=867756"},"modified":"2024-09-06T23:39:24","modified_gmt":"2024-09-07T06:39:24","slug":"msr-asia-theory-lecture-series","status":"publish","type":"msr-event","link":"https:\/\/www.microsoft.com\/en-us\/research\/event\/msr-asia-theory-lecture-series\/","title":{"rendered":"MSR Asia Theory Lecture Series"},"content":{"rendered":"\n\n\n\n\n

The MSR Asia Theory Lecture Series is a forum where we invite researchers from around the world to share the latest theoretical advances in big data, artificial intelligence, and related areas. The lecture series is broadcast live over Teams. If you would like to receive information about upcoming talks, please send an email with the subject \u201cSubscribe to the Lecture Series\u201d to MSRA.TheoryCenter@outlook.com<\/em><\/a> or subscribe to the WeChat official account \u201c\u5fae\u8f6f\u5b66\u672f\u5408\u4f5c\u201d. <\/p>\n\n\n\n

Lectures<\/h2>\n\n\n\n

8\/21\/2024:<\/strong> Regularization and Optimal Multiclass Learning, Shang-Hua Teng<\/p>\n\n\n\n\n\n

Abstract: <\/strong>The quintessential learning algorithm of empirical risk minimization (ERM) is known to fail in various settings for which uniform convergence does not characterize learning. Relatedly, the practice of machine learning is rife with considerably richer algorithmic techniques, perhaps the most notable of which is regularization. Nevertheless, no such technique or principle has broken away from the pack to characterize optimal learning in these more general settings. The purpose of this work is to precisely characterize the role of regularization in perhaps the simplest setting for which ERM fails: multiclass learning with arbitrary label sets. Using one-inclusion graphs (OIGs), we exhibit optimal learning algorithms that dovetail with tried-and-true algorithmic principles: Occam\u2019s Razor as embodied by structural risk minimization (SRM), the principle of maximum entropy, and Bayesian inference. We also extract from OIGs a combinatorial sequence we term the Hall complexity, which is the first to characterize a problem\u2019s transductive error rate exactly. Lastly, we introduce a generalization of OIGs and the transductive learning setting to the agnostic case, where we show that optimal orientations of Hamming graphs \u2013 judged using nodes\u2019 outdegrees minus a system of node-dependent credits \u2013 characterize optimal learners exactly. We demonstrate that an agnostic version of the Hall complexity again characterizes error rates exactly, and exhibit an optimal learner using maximum entropy programs.<\/p>\n\n\n\n

\"Shang-Hua<\/figure>\n\n\n\n

Bio:<\/strong> Shang-Hua Teng is a University Professor and Seely G. Mudd Professor of Computer Science and Mathematics at USC. He is a fellow of SIAM, ACM, and the Alfred P. Sloan Foundation, and has twice won the G\u00f6del Prize: first in 2008 for developing smoothed analysis, and then in 2015 for designing the breakthrough scalable Laplacian solver. Citing him as \u201cone of the most original theoretical computer scientists in the world\u201d, the Simons Foundation named him a 2014 Simons Investigator to pursue long-term curiosity-driven fundamental research. He also received the 2009 Fulkerson Prize, the 2023 Science & Technology Award for Overseas Chinese from the China Computer Federation, the 2022 ACM SIGecom Test of Time Award (for settling the complexity of computing a Nash equilibrium), the 2021 ACM STOC Test of Time Award (for smoothed analysis), the 2020 Phi Kappa Phi Faculty Recognition Award for his book Scalable Algorithms for Data and Network Analysis, and the 2011 ACM STOC Best Paper Award (for improving maximum-flow minimum-cut algorithms). In addition, he and collaborators developed the first optimal well-shaped Delaunay mesh generation algorithms for arbitrary three-dimensional domains, settled the Rousseeuw-Hubert regression-depth conjecture in robust statistics, and resolved two long-standing complexity-theoretical questions regarding the Sprague-Grundy theorem in combinatorial game theory. For his industry work with Xerox, NASA, Intel, IBM, Akamai, and Microsoft, he received fifteen patents in areas including compiler optimization, Internet technology, and social networks. Dedicated to teaching his daughter to speak Chinese as the sole Chinese-speaking parent in an otherwise English-speaking family and environment, he has also become fascinated with children's bilingual learning.<\/p>\n\n\n\n\n\n

4\/25\/2024:<\/strong> Toward Demystifying Grokking, Wei Hu<\/p>\n\n\n\n\n\n

Abstract: <\/strong>Grokking is a surprising phenomenon in which a neural network first memorizes the training set, resulting in perfect training accuracy but near-random test accuracy, and then, after sufficiently longer training, suddenly transitions to perfect test accuracy. I will talk about our recent work toward theoretically explaining the grokking phenomenon. First, we show that a dichotomy of early- and late-phase implicit biases can provably induce grokking, and we exemplify it in simplified settings such as sparse linear models and matrix completion. Second, we show that in a simple non-linear classification task, grokking also provably occurs and coincides with another intriguing phenomenon known as benign overfitting. <\/p>\n\n\n\n

\"Weihu\"<\/figure>\n\n\n\n

Bio:<\/strong> Wei Hu is an Assistant Professor in Computer Science and Engineering at the University of Michigan. He obtained his Ph.D. degree from Princeton University, where he was advised by Sanjeev Arora, and his Bachelor’s degree from Tsinghua University, where he was a member of Yao Class. His research interest is in the theoretical and scientific foundations of deep learning. He is a recipient of the Google Research Scholar award and the Siebel Scholarship.<\/p>\n\n\n\n

Slides<\/a><\/p>\n\n\n\n\n\n

2\/28\/2024:<\/strong> Feature learning of neural network by mean field Langevin dynamics: Optimization and generalization, Taiji Suzuki<\/p>\n\n\n\n\n\n

Abstract: <\/strong>In this talk, I will discuss the feature learning ability of neural networks from statistical and optimization perspectives. In particular, I will present recent developments in the theory of the mean-field Langevin dynamics (MFLD) and its application to neural network training. MFLD is a nonlinear generalization of the gradient Langevin dynamics (GLD) that minimizes an entropy-regularized convex function defined on the space of probability distributions, and it naturally arises from the optimization of two-layer neural networks via (noisy) gradient descent. In the first half, I will present the convergence result of MFLD and explain how the convergence of MFLD is connected to the duality gap through the log-Sobolev inequality of the so-called proximal Gibbs measure. In addition, the time-space discretization of MFLD will be addressed; it can be shown that the discretization error can be bounded uniformly in time, unlike in existing work. In the latter half, I will discuss the generalization error analysis of neural networks trained by MFLD. Addressing a binary classification problem, we obtain a general test classification error bound that provides a fast learning rate based on a local Rademacher complexity analysis. By applying this general framework to the k-sparse parity problem, we demonstrate how feature learning improves the sample complexity compared with kernel methods. Finally, we also discuss how the anisotropic structure of the input affects the sample complexity and computational complexity. If the data is well aligned with the target function, both sample and computational complexities are significantly mitigated.<\/p>\n\n\n\n
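For orientation, here is a schematic statement of the entropy-regularized objective and the associated dynamics as they commonly appear in the MFLD literature; the notation is illustrative and not necessarily the speaker's exact formulation.

```latex
% Objective over probability measures (L is convex in \mu, e.g. the loss of a mean-field two-layer network):
\min_{\mu \in \mathcal{P}(\mathbb{R}^d)} \; F(\mu) \;=\; L(\mu) \;+\; \lambda\,\mathrm{Ent}(\mu),
\qquad \mathrm{Ent}(\mu) \;=\; \int \mu(x)\log\mu(x)\,\mathrm{d}x .
% Mean-field Langevin dynamics (noisy gradient descent in the mean-field limit) and the proximal Gibbs measure:
\mathrm{d}X_t \;=\; -\nabla_x \frac{\delta L}{\delta\mu}(\mu_t)(X_t)\,\mathrm{d}t \;+\; \sqrt{2\lambda}\,\mathrm{d}W_t,
\qquad \mu_t = \mathrm{Law}(X_t),
\qquad p_{\mu}(x) \;\propto\; \exp\!\Big(-\tfrac{1}{\lambda}\,\frac{\delta L}{\delta\mu}(\mu)(x)\Big).
```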

\"Photo<\/figure>\n\n\n\n

Bio:<\/strong> Taiji Suzuki is currently an Associate Professor in the Department of Mathematical Informatics at the University of Tokyo. He also serves as the leader of the \u201cDeep Learning Theory\u201d team at RIKEN AIP. He received his Ph.D. degree in information science and technology from the University of Tokyo in 2009. He worked as an assistant professor in the Department of Mathematical Informatics at the University of Tokyo between 2009 and 2013, and was then an associate professor in the Department of Mathematical and Computing Science at Tokyo Institute of Technology between 2013 and 2017. He has served as an area chair of premier conferences such as NeurIPS, ICML, ICLR, and AISTATS, as a program chair of ACML 2019, and as an action editor of the Annals of Statistics. He received an Outstanding Paper Award at ICLR in 2021, the MEXT Young Scientists\u2019 Prize, and the 2017 Outstanding Achievement Award from the Japan Statistical Society. He is interested in deep learning theory, nonparametric statistics, high-dimensional statistics, and stochastic optimization. In particular, he mainly works on deep learning theory from several aspects, such as representation ability, generalization ability, and optimization ability. He has also worked on stochastic optimization for accelerating large-scale machine learning, including variance reduction methods, Nesterov\u2019s acceleration, federated learning, and non-convex noisy optimization.<\/p>\n\n\n\n\n\n

1\/25\/2024:<\/strong> Recent Advances in Coresets for Clustering, Shaofeng Jiang<\/p>\n\n\n\n\n\n

Abstract: <\/strong>The coreset is a popular data reduction technique. Roughly, a coreset is a tiny proxy of the dataset such that, for every feasible solution, the objective function evaluated on the coreset approximates that on the original dataset. Coresets are particularly useful for dealing with big data since they can usually be constructed efficiently in sublinear computation models, including streaming and parallel computing.<\/p>\n\n\n\n

The study of coresets for clustering is very fruitful, and nearly tight bounds have recently been obtained for well-known problems such as k-median and k-means and their variants. In this talk, I will introduce the recent advances in coresets for clustering, with a focus on presenting several fundamental sampling techniques, including importance sampling and hierarchical uniform sampling, for the construction of coresets. I will conclude the talk by discussing future directions for the study of coresets (and beyond).<\/p>\n\n\n\n
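To make the importance-sampling idea concrete, here is a minimal sketch of a sensitivity-based coreset for k-means. It is an illustrative construction, not the specific algorithms from the talk; the sampling distribution below is the standard cost-plus-cluster-share upper bound on sensitivities, and the function name and parameters are hypothetical.

```python
# Minimal sketch: importance (sensitivity) sampling coreset for k-means.
import numpy as np
from sklearn.cluster import KMeans

def kmeans_coreset(X, k, m, seed=0):
    """Return (points, weights): an m-point weighted coreset for k-means on X."""
    rng = np.random.default_rng(seed)
    # 1. Compute a rough approximate solution to base the sensitivities on.
    approx = KMeans(n_clusters=k, n_init=3, random_state=seed).fit(X)
    centers, labels = approx.cluster_centers_, approx.labels_
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1).min(axis=1)
    cluster_sizes = np.bincount(labels, minlength=k)
    # 2. Sensitivity upper bound: the point's share of the cost plus a uniform
    #    share within its cluster (guards against points in tiny clusters).
    s = d2 / max(d2.sum(), 1e-12) + 1.0 / cluster_sizes[labels]
    p = s / s.sum()
    # 3. Sample m points i.i.d. from p and reweight by 1/(m * p_i), so the
    #    weighted coreset cost is an unbiased estimate of the full cost.
    idx = rng.choice(len(X), size=m, replace=True, p=p)
    return X[idx], 1.0 / (m * p[idx])

# Usage: the weighted cost on (C, w) approximates the full k-means cost for any centers.
X = np.random.default_rng(1).normal(size=(5000, 2))
C, w = kmeans_coreset(X, k=5, m=200)
```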

\"a<\/figure>\n\n\n\n

Bio:<\/strong> Shaofeng Jiang is an assistant professor at Peking University. He obtained his PhD at the University of Hong Kong, and before he joined PKU, he worked as a postdoctoral researcher at the Weizmann Institute of Science, and an assistant professor at Aalto University. His research interest generally lies in theoretical computer science, with a focus on sublinear algorithms.<\/p>\n\n\n\n

Slides<\/a><\/p>\n\n\n\n\n\n

11\/28\/2023:<\/strong> Textbooks Are All You Need, Yin Tat Lee<\/p>\n\n\n\n\n\n

Abstract: <\/strong>It was widely believed that training large language models (LLMs) required a vast dataset and an immense number of parameters. This is computationally demanding, requiring significant GPU resources. GPT-4 exemplified this belief, being a colossal model trained on a vast corpus.<\/p>\n\n\n\n

In light of this, we sought to determine whether such impressive results could be achieved with smaller models and limited data for code generation. We demonstrate that with high-quality data, the demand for expansive datasets and a multitude of parameters lessens. The outcome was a model with only a few billion parameters, which not only met or exceeded the performance of existing open-source models but did so using a mere 1\/1000th of the training compute. Moreover, we will discuss specific emergent properties observed in the model after its fine-tuning on coding exercises.<\/p>\n\n\n\n

\"a<\/figure>\n\n\n\n

Bio:<\/strong> Yin Tat Lee is a Principal Researcher at MSR and an Associate Professor in the Paul G. Allen School of Computer Science & Engineering at the University of Washington. His research interests are convex optimization, convex geometry, graph algorithms, online algorithms, and differential privacy. During his career, he has received a variety of awards, including Best Paper Awards at FOCS, SODA, and NeurIPS, the Sprowls Award, an NSF CAREER Award, the A.W. Tucker Prize, a Microsoft Research Faculty Fellowship, a Sloan Research Fellowship, and a Packard Fellowship.<\/p>\n\n\n\n\n\n

10\/30\/2023:<\/strong> Intelligent Heuristics Are the Future of Computing, Shang-Hua Teng<\/p>\n\n\n\n\n\n

Abstract: <\/strong>Back in 1988, the partial game trees explored by computer chess programs were among the largest search structures in real-world computing. Because the game tree is too large to be fully evaluated, chess programs must make heuristic strategic decisions based on partial information, making it an illustrative subject for teaching AI search. In one of his lectures that year on AI search for games and puzzles, Professor Hans Berliner \u2014 a pioneer of computer chess programs \u2014 stated: \u201cIntelligent heuristics are the future of computing.\u201d<\/p>\n\n\n\n

As a student in the field of the theory of computation, I was naturally perplexed but fascinated by this perspective. I had been trained to believe that \u201cAlgorithms and computational complexity theory are the foundation of computer science.\u201d However, as it happens, my attempts to understand heuristics in computing have subsequently played a significant role in my career as a theoretical computer scientist. I have come to realize that Berliner\u2019s postulation is a far-reaching worldview, particularly in the age of big, rich, complex, and multifaceted data and models, when computing has ubiquitous interactions with science, engineering, humanity, and society.  <\/p>\n\n\n\n

In this talk, I will share some of my experiences on the subject of heuristics in computing, presenting examples of theoretical attempts to understand the behavior of heuristics on real data, as well as efforts to design practical heuristics with desirable theoretical characterizations. My hope is that these theoretical insights from past heuristics \u2014 such as spectral partitioning, multilevel methods, evolutionary algorithms, and simplex methods \u2014 can shed light on and further inspire a deeper understanding of the current and future techniques in AI and data mining.<\/p>\n\n\n\n

\"Shang-Hua<\/figure>\n\n\n\n

Bio:<\/strong> Shang-Hua Teng is a University Professor and Seely G. Mudd Professor of Computer Science and Mathematics at USC. He is a fellow of SIAM, ACM, and the Alfred P. Sloan Foundation, and has twice won the G\u00f6del Prize: first in 2008 for developing smoothed analysis, and then in 2015 for designing the breakthrough scalable Laplacian solver. Citing him as \u201cone of the most original theoretical computer scientists in the world\u201d, the Simons Foundation named him a 2014 Simons Investigator to pursue long-term curiosity-driven fundamental research. He also received the 2009 Fulkerson Prize, the 2021 ACM STOC Test of Time Award (for smoothed analysis), the 2022 ACM SIGecom Test of Time Award (for settling the complexity of computing a Nash equilibrium), the 2011 ACM STOC Best Paper Award (for improving maximum-flow minimum-cut algorithms), and the 2023 Science & Technology Award for Overseas Chinese (opens in new tab)<\/span><\/a> from the China Computer Federation. In addition, he and collaborators developed the first optimal well-shaped Delaunay mesh generation algorithms for arbitrary three-dimensional domains, settled the Rousseeuw-Hubert regression-depth conjecture in robust statistics, and resolved two long-standing complexity-theoretical questions regarding the Sprague-Grundy theorem in combinatorial game theory. For his industry work with Xerox, NASA, Intel, IBM, Akamai, and Microsoft, he received fifteen patents in areas including compiler optimization, Internet technology, and social networks. Dedicated to teaching his daughter to speak Chinese as the sole Chinese-speaking parent in an otherwise English-speaking family and environment, he has also become fascinated with children's bilingual learning.<\/p>\n\n\n\n\n\n

10\/23\/2023:<\/strong> The mathematics of complex streamed data, Terry Lyons<\/p>\n\n\n\n\n\n

Abstract:<\/strong> Complex streams of evolving data are better understood by their effects on nonlinear systems than by their values at particular times. The question of which nonlinear systems to use would seem to be context dependent, but it is not. Core to rough path theory is a simple universal nonlinear system that captures all the information needed to predict any response to any nonlinear system. This idealized mathematical feature set is known as the signature of the stream. Its abstract simplicity opens the possibility of understanding and working with streams in the same context-free way that calculators work with numbers. Signature-based techniques offer simple-to-apply universal numerical methods that are robust to irregular data and efficient at representing the order of events and complex oscillatory data. Specific software can be developed and then applied across many contexts. Signatures underpin prize-winning contributions in recognizing Chinese handwriting, in detecting sepsis, and in generating financial data, and most recently in the ability to score streams as outliers against a corpus of normal streams. This principled outlier technology has emerged as a powerful unifying technique; it identifies radio frequency interference in astronomical data, brain injury from MEG data, and more. The underpinning theoretical contributions span a range from abstract algebra and non-commutative analysis to questions of organizing efficient numerical calculation. See www.datasig.ac.uk\/ (opens in new tab)<\/span><\/a>. New hyperbolic partial differential equations have been developed that compute the \u201csignature kernel\u201d trick without ever having to introduce signatures. Neural controlled differential equations can directly harness approaches such as the log-ODE method and consume the control as a rough path.<\/p>\n\n\n\n
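As a concrete illustration of the signature, here is a minimal sketch that computes the signature of a piecewise-linear stream truncated at level 2, combining segments with Chen's identity. It is for illustration only; production libraries such as the RoughPy package mentioned below provide full implementations.

```python
# Minimal sketch: level-1 and level-2 signature of a piecewise-linear path.
import numpy as np

def signature_level2(path):
    """path: (T, d) array of sampled stream values.
    Returns (S1, S2): S1[i] is the total increment in coordinate i,
    S2[i, j] is the iterated integral of dX^i dX^j over the whole path."""
    path = np.asarray(path, dtype=float)
    d = path.shape[1]
    S1, S2 = np.zeros(d), np.zeros((d, d))
    for a, b in zip(path[:-1], path[1:]):
        delta = b - a                        # increment of this linear segment
        seg2 = 0.5 * np.outer(delta, delta)  # level-2 signature of one segment
        # Chen's identity: S2(x * y) = S2(x) + S2(y) + S1(x) (tensor) S1(y)
        S2 = S2 + seg2 + np.outer(S1, delta)
        S1 = S1 + delta
    return S1, S2

# Example: the antisymmetric part of S2 is the Levy area, which encodes the
# order in which the two coordinates move.
t = np.linspace(0, 2 * np.pi, 200)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)
S1, S2 = signature_level2(circle)
print(S1)                   # ~0: the path returns to its starting point
print(S2[0, 1] - S2[1, 0])  # ~2*pi: twice the signed area enclosed by the loop
```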

\"a<\/figure>\n\n\n\n

Bio:<\/strong> <\/p>\n\n\n\n

Professor Terry Lyons is Wallis Professor Emeritus and Professor of Mathematics at the University of Oxford. He is currently PI of the DataSig programme (primarily funded by EPSRC) and of the complementary research programme CIMDA-Oxford (under the support of InnoHK and the HKSAR). He was a founding member (2007), and later Director (2011-2015), of the Oxford-Man Institute of Quantitative Finance, and was Director of the Wales Institute of Mathematical and Computational Sciences (WIMCS; 2008-2011). He came to Oxford in 2000, having previously been Professor of Mathematics at Imperial College London (1993-2000); before that he held the Colin Maclaurin Chair at Edinburgh (1985-93). He was President of the London Mathematical Society (2013-15).<\/p>\n\n\n\n

Professor Lyons\u2019s long-term research interests are focused on the mathematics of streamed data and building strong applications from these mathematical insights. His current goal is to use rough path theory to develop innovative and truly generic tools for working with streamed data and make these widely accessible through the python package RoughPy. One example of this synergy comes from the signature of a stream. Signatures underpin prize-winning contributions in recognizing Chinese handwriting, in detecting sepsis, and in generating financial data, and most recently in the ability to score streams as outliers against a corpus of normal streams. This principled outlier technology has emerged as a powerful unifying technique; it identifies radio frequency interference in astronomical data, brain injury from MEG data, and more. The underpinning theoretical contributions span a range from abstract algebra and non-commutative analysis to questions of organizing efficient numerical calculation. See www.datasig.ac.uk\/ (opens in new tab)<\/span><\/a><\/p>\n\n\n\n\n\n

10\/18\/2023:<\/strong> Is RLHF More Difficult than Standard RL?, Chi Jin<\/p>\n\n\n\n\n\n

Abstract:<\/strong> Reinforcement learning from Human Feedback (RLHF) learns from preference signals, while standard Reinforcement Learning (RL) directly learns from reward signals. Preferences arguably contain less information than rewards, which makes preference-based RL seemingly more difficult. This work theoretically proves that, for a wide range of preference models, we can solve preference-based RL directly using existing algorithms and techniques for reward-based RL, with small or no extra costs. Specifically, (1) for preferences that are drawn from reward-based probabilistic models, we reduce the problem to robust reward-based RL that can tolerate small errors in rewards; (2) for general arbitrary preferences where the objective is to find the von Neumann winner, we reduce the problem to multiagent reward-based RL which finds Nash equilibria for factored Markov games under a restricted set of policies. The latter case can be further reduced to adversarial MDP when preferences only depend on the final state. We instantiate all reward-based RL subroutines by concrete provable algorithms and apply our theory to a large class of models including tabular MDPs and MDPs with generic function approximation. We further provide guarantees when K-wise comparisons are available.<\/p>\n\n\n\n

\"a<\/figure>\n\n\n\n

Bio:<\/strong> Chi Jin is an assistant professor in the Electrical and Computer Engineering department of Princeton University. He obtained his PhD degree in Computer Science at the University of California, Berkeley, advised by Michael I. Jordan. His research mainly focuses on theoretical machine learning, with special emphasis on nonconvex optimization and Reinforcement Learning (RL). In nonconvex optimization, he provided the first proof showing that a first-order algorithm (stochastic gradient descent) is capable of escaping saddle points efficiently. In RL, he provided the first efficient learning guarantees for Q-learning and least-squares value iteration algorithms when exploration is necessary. His work also lays the theoretical foundation for RL with function approximation, multiagent RL, and partially observable RL.<\/p>\n\n\n\n

Video (opens in new tab)<\/span><\/a><\/p>\n\n\n\n\n\n

8\/24\/2023:<\/strong> On the Power of Foundation Models, Yang Yuan<\/p>\n\n\n\n\n\n

Abstract:<\/strong> With infinitely many high-quality data points, infinite computational power, an infinitely large foundation model with a perfect training algorithm and guaranteed zero generalization error on the pretext task, can the model be used for everything? This question cannot be answered by the existing theory of representation, optimization or generalization, because the issues they mainly investigate are assumed to be nonexistent here. In this paper, we show that category theory provides powerful machinery to answer this question. We have proved three results. The first one limits the power of prompt-based learning, saying that the model can solve a downstream task with prompts if and only if the task is representable. The second one says fine-tuning does not have this limit, as a foundation model with the minimum required power (up to symmetry) can theoretically solve downstream tasks for the category defined by the pretext task, with fine-tuning and enough resources. Our final result can be seen as a new type of generalization theorem, showing that the foundation model can generate unseen objects from the target category (e.g., images) using the structural information from the source category (e.g., texts). Along the way, we provide a categorical framework for supervised and self-supervised learning, which might be of independent interest.<\/p>\n\n\n\n

\"a<\/figure>\n\n\n\n

Bio:<\/strong> Yang Yuan is now an assistant professor at IIIS, Tsinghua. He finished his undergraduate study at Peking University in 2012. Afterwards, he received his PhD at Cornell University in 2018, advised by Professor Robert Kleinberg. Before joining Tsinghua, he spent one year at MIT Institute for Foundations of Data Science (MIFODS) as a postdoc researcher. He works on AI+Healthcare, AI Theory and Applied Category Theory.<\/p>\n\n\n\n

Video (opens in new tab)<\/span><\/a><\/p>\n\n\n\n\n\n

4\/27\/2023:<\/strong> Understanding Adam and AdamW through proximal updates, scale-freeness, and relaxed smoothness, Francesco Orabona<\/p>\n\n\n\n\n\n

Abstract:<\/strong> Adam and AdamW are the most commonly used algorithms for training deep neural networks due to their remarkable performance. However, despite a massive amount of research, it is fair to say that we are still far from understanding the true reasons why they work so well. In this talk, I’ll show you some recent results on unique characteristics of Adam and AdamW.
First, I'll show how AdamW can be understood as an approximation of a proximal update on the squared L2 regularizer. Next, I'll show that, contrary to Adam, AdamW's update is "scale-free", i.e., it is invariant to component-wise rescaling of the gradients. I'll show how scale-freeness provides an automatic preconditioning and how it correlates with the better performance of AdamW over Adam in deep learning experiments. Finally, I'll present the first analysis of a (minor) variant of Adam that has a provable advantage over SGD for functions satisfying a relaxed smoothness assumption, like the objective functions of Transformers.<\/p>\n\n\n\n
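To make the Adam-versus-AdamW distinction concrete, here is a minimal sketch of the two per-step updates; the hyperparameter names follow common conventions and the code is illustrative, not the talk's exact formulation.

```python
# Minimal sketch: Adam with L2 regularization vs. AdamW's decoupled weight decay.
import numpy as np

def adam_l2_step(w, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8, wd=1e-2):
    g = grad + wd * w                  # L2 penalty folded into the gradient, so the
    m = b1 * m + (1 - b1) * g          # decay term is rescaled by 1/sqrt(v) as well:
    v = b2 * v + (1 - b2) * g * g      # the resulting update is NOT scale-free.
    m_hat, v_hat = m / (1 - b1**t), v / (1 - b2**t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

def adamw_step(w, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8, wd=1e-2):
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad * grad
    m_hat, v_hat = m / (1 - b1**t), v / (1 - b2**t)
    # Decoupled decay: applied outside the adaptive rescaling, so the whole update is
    # invariant to coordinate-wise rescaling of the gradients and can be read as an
    # approximate proximal step on the squared L2 regularizer.
    w = w - lr * (m_hat / (np.sqrt(v_hat) + eps) + wd * w)
    return w, m, v
```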

\"Francesco<\/figure>\n\n\n\n

Bio:<\/strong> Francesco Orabona is an Associate Professor of Electrical & Computer Engineering at Boston University. His research interests lie in online learning, optimization, and statistical learning theory. He obtained his Ph.D. from the University of Genova in 2007. He was previously an Assistant Professor of Computer Science at Stony Brook University, a Senior Research Scientist at Yahoo Labs, and a Research Assistant Professor at the Toyota Technological Institute at Chicago. He received a Faculty Early Career Development (CAREER) Award from NSF in 2021 and a Google Research Award in 2017.<\/p>\n\n\n\n

Video (opens in new tab)<\/span><\/a><\/p>\n\n\n\n\n\n

3\/10\/2023:<\/strong> Modeling Multiagent Game Dynamics: Approaches to Equilibrium Computation and Incentive Analysis, Xiaotie Deng<\/p>\n\n\n\n\n\n

Abstract: <\/strong>This talk explores various research approaches to modeling the computation of equilibria and analysis of incentives in game dynamics. We discuss computational complexity, sequential and interactive optimization, and equilibrium analysis in multiagent systems.<\/p>\n\n\n\n

\"a<\/figure>\n\n\n\n

Bio:<\/strong> Xiaotie Deng is a Chair Professor at Peking University with a Ph.D. from Stanford University. His research focuses on algorithmic game theory, particularly in the context of the Internet and Blockchain Economics. Deng has taught at several universities and is a fellow of the ACM, IEEE, and CSIAM. He is a foreign member of Academia Europaea and received the 2022 Test of Time Award from ACM SIGecom.<\/p>\n\n\n\n

Video (opens in new tab)<\/span><\/a><\/p>\n\n\n\n\n\n

1\/12\/2023: <\/strong>Passive and Active Multi-Task Representation Learning, Simon (Shaolei) Du<\/p>\n\n\n\n\n\n

Abstract: <\/strong>Representation learning has been widely used in many applications. In this talk, I will present our work which uncovers when and why representation learning provably improves the sample efficiency, from a statistical learning point of view. Furthermore, I will talk about how to actively select the most relevant task to boost the performance.<\/p>\n\n\n\n

\"Simon<\/figure>\n\n\n\n

Bio:<\/strong> Simon Shaolei Du is an assistant professor in the Paul G. Allen School of Computer Science & Engineering at the University of Washington. His research interests are broadly in machine learning, such as deep learning, representation learning, and reinforcement learning. Prior to starting as faculty, he was a postdoc at the Institute for Advanced Study in Princeton. He completed his Ph.D. in Machine Learning at Carnegie Mellon University. Simon's research has been recognized by a Samsung AI Researcher of the Year Award, an NSF CAREER Award, an Nvidia Pioneer Award, an AAAI New Faculty Highlight, and a Distinguished Dissertation Award honorable mention from CMU.<\/p>\n\n\n\n

Video (opens in new tab)<\/span><\/a><\/p>\n\n\n\n

<\/p>\n\n\n\n\n\n

12\/22\/2022:<\/strong> Reward-free Reinforcement Learning via Sample-Efficient Representation Learning, Yingbin Liang<\/p>\n\n\n\n\n\n

Abstract:<\/strong> As reward-free reinforcement learning (RL) becomes a powerful framework for a variety of multi-objective applications, representation learning arises as an effective technique for dealing with the curse of dimensionality in reward-free RL. However, existing representation learning algorithms for reward-free RL still suffer from high sample complexity, although they are polynomially efficient. In this talk, I will first present a novel representation learning algorithm that we propose for reward-free RL. We show that such an algorithm provably finds a near-optimal policy and attains near-accurate system identification via reward-free exploration, with significantly improved sample complexity compared to the best previously known result. I will then present our characterization of the benefit of representation learning in reward-free multitask (a.k.a. meta) RL, as well as the benefit of employing the representation learned upstream in downstream tasks. I will conclude my talk with remarks on future directions. The work to be presented was joint with Yuan Cheng (USTC), Ruiquan Huang (PSU), Dr. Songtao Feng (OSU), Prof. Jing Yang (PSU), and Prof. Hong Zhang (USTC).<\/p>\n\n\n\n

\"a<\/figure>\n\n\n\n

Bio:<\/strong> Dr. Yingbin Liang is currently a Professor in the Department of Electrical and Computer Engineering at the Ohio State University (OSU) and a core faculty member of the Ohio State Translational Data Analytics Institute (TDAI). She also serves as the Deputy Director of the AI-Edge Institute at OSU. Dr. Liang received her Ph.D. degree in Electrical Engineering from the University of Illinois at Urbana-Champaign in 2005 and served on the faculty of the University of Hawaii and Syracuse University before joining OSU. Dr. Liang's research interests include machine learning, optimization, information theory, and statistical signal processing. She received the National Science Foundation CAREER Award and the State of Hawaii Governor Innovation Award in 2009, and the EURASIP Best Paper Award in 2014.<\/p>\n\n\n\n

Video (opens in new tab)<\/span><\/a><\/p>\n\n\n\n\n\n

<\/p>\n\n\n\n

11\/04\/2022:<\/strong> Player-optimal Stable Regret for Bandit Learning in Matching Markets, Shuai Li<\/p>\n\n\n\n\n\n

Abstract: <\/strong>The problem of matching markets has a long history of study in the literature due to its wide range of applications. Finding a stable matching is a common equilibrium objective in this problem. Since market participants are usually uncertain of their preferences, a rich line of recent works studies the online setting where participants on one side (players) learn their unknown preferences from iterative interactions with the other side (arms). Most previous works in this line are only able to derive theoretical guarantees for the player-pessimal stable regret, which is defined with respect to the players\u2019 least-preferred stable matching. <\/p>\n\n\n\n

However, under the pessimal stable matching, players only obtain the least reward among all stable matchings. To maximize players\u2019 profits, the player-optimal stable matching would be the most desirable. Though Basu et al. [2021] give an upper bound for the player-optimal stable regret, their bound can be exponentially large if the players\u2019 preference gap is small. Whether a polynomial guarantee for this regret exists is a significant but still open problem. In this work, we provide a new algorithm and show that the optimal stable regret of each player can be upper bounded by O(K log T \/ \u2206^2), where K is the number of arms, T is the horizon, and \u2206 is the players\u2019 minimum preference gap. This result significantly improves on previous works, which either adopt the weaker player-pessimal stable regret objective or apply only to markets with special assumptions. When the preferences of participants satisfy some special conditions, our regret upper bound also matches the previously derived lower bound. This work was accepted at SODA 2023.<\/p>\n\n\n\n

\"Shuai<\/figure>\n\n\n\n

Bio:<\/strong> Shuai Li is currently an Assistant Professor in the John Hopcroft Center of Shanghai Jiao Tong University. She received her PhD in Computer Science from the Chinese University of Hong Kong, her master's degree in Mathematics from the University of Chinese Academy of Sciences, and her bachelor's degree in Mathematics from Zhejiang University. Her research interests include machine learning theory, bandit algorithms, and reinforcement learning algorithms. She has published 40+ papers in top machine learning conferences such as ICML\/NeurIPS\/AAAI\/IJCAI\/KDD and serves as a reviewer for these conferences. She is a recipient of the Shanghai Sailing Program (2020) and a Google PhD Fellowship (2018).<\/p>\n\n\n\n

<\/p>\n\n\n\n

Video<\/p>\n\n\n\n\n\n

<\/p>\n\n\n\n

10\/13\/2022:<\/strong> What Should a Good Deep Neural Network Look Like? Insights from a Layer-Peeled Model and the Law of Equi-Separation, Weijie Su<\/p>\n\n\n\n\n\n

Abstract:<\/strong> In this talk, we will investigate the emergence of geometric patterns in well-trained deep learning models by making use of a layer-peeled model and the law of equi-separation. The former is a nonconvex optimization program that models the last-layer features and weights. We use the model to shed light on the neural collapse phenomenon of Papyan, Han, and Donoho, and to predict a hitherto-unknown phenomenon that we term minority collapse in imbalanced training. This is based on joint work with Cong Fang, Hangfeng He, and Qi Long.<\/p>\n\n\n\n

The law of equi-separation is a pervasive empirical phenomenon that describes how data are separated according to their class membership from the bottom to the top layer in a well-trained neural network. We will show, through extensive computational experiments, that neural networks improve data separation through layers in a simple exponential manner. This law implies that each layer improves the separation by a roughly equal ratio, thereby showing that all layers are created equal. We will conclude the talk by discussing the implications of this law for the interpretation, robustness, and generalization of deep learning, as well as for the inadequacy of some existing approaches toward demystifying deep learning. This is based on joint work with Hangfeng He.<\/p>\n\n\n\n

\"a<\/figure>\n\n\n\n

Bio:<\/strong> Weijie Su is an Associate Professor in the Wharton Statistics and Data Science Department and, by courtesy, in the Department of Computer and Information Science, at the University of Pennsylvania. He is a co-director of Penn Research in Machine Learning. Prior to joining Penn, he received his Ph.D. from Stanford University in 2016 and his bachelor\u2019s degree from Peking University in 2011. His research interests span privacy-preserving data analysis, deep learning theory, optimization, high-dimensional statistics, and mechanism design. He is a recipient of the Stanford Theodore Anderson Dissertation Award in 2016, an NSF CAREER Award in 2019, an Alfred Sloan Research Fellowship in 2020, the SIAM Early Career Prize in Data Science in 2022, and the IMS Peter Gavin Hall Prize in 2022.<\/p>\n\n\n\n

Video (opens in new tab)<\/span><\/a><\/p>\n\n\n\n\n\n

<\/p>\n\n\n\n

09\/22\/2022:<\/strong> On the (Non)smoothness of Neural Network Training, Jingzhao Zhang<\/p>\n\n\n\n\n\n

Abstract:<\/strong> In this talk, we will discuss the following questions: why is neural network training non-smooth from an optimization perspective, and how should we analyze convergence for non-smooth problems? We start by showing that non-smoothness is essential to standard neural network training procedures, and that network training converges in an unstable manner. We then provide theoretical models for understanding why optimization in neural networks is unstable, and how new definitions of convergence can reconcile theory with practice.<\/p>\n\n\n\n

\"people\"<\/figure>\n\n\n\n

Bio:<\/strong> Jingzhao Zhang is an assistant professor at IIIS, Tsinghua University. He graduated from MIT EECS under the supervision of Prof. Ali Jadbabaie and Prof. Suvrit Sra. His research focuses on providing theoretical justifications and analyses for practical large-scale optimization algorithms. He is also interested in machine learning applications, especially those involving dynamical system formulations.<\/p>\n\n\n\n

Video (opens in new tab)<\/span><\/a><\/p>\n\n\n\n\n\n

<\/p>\n\n\n\n

08\/25\/2022:<\/strong> Local Elasticity of Neural Networks and Its Inspired Theory, Zhun Deng<\/p>\n\n\n\n\n\n

Abstract: <\/strong>In this talk, I will briefly review the local elasticity of neural networks proposed by He et al. Then, based on that, I will introduce a new type of stability notion, which can improve over classical stability notions with respect to generalization behavior in certain situations. Specifically, among different notions of stability, uniform stability is arguably the most popular one, and it yields exponential generalization bounds. However, uniform stability only considers the worst-case loss change (or so-called sensitivity) from removing a single data point, which is distribution-independent and therefore undesirable. In many cases the worst-case sensitivity of the loss is much larger than the average sensitivity over the choice of the removed data point, especially in advanced models such as random feature models or neural networks. Many previous works try to mitigate this distribution-independence issue by proposing weaker notions of stability; however, they either yield only polynomial bounds or the derived bounds do not vanish as the sample size goes to infinity. Given that, we propose locally elastic stability as a weaker and distribution-dependent stability notion that still yields exponential generalization bounds. We further demonstrate that locally elastic stability implies tighter generalization bounds than those derived based on uniform stability in many situations, by revisiting the examples of bounded support vector machines, regularized least squares regression, and stochastic gradient descent.<\/p>\n\n\n\n
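For reference, the classical (Bousquet-Elisseeff) notion of uniform stability that the abstract contrasts against can be stated as follows: an algorithm A is \beta-uniformly stable if removing any single point from any training set S changes the loss at any test point z by at most \beta.

```latex
\sup_{S,\; i,\; z}\; \Bigl|\, \ell\bigl(A(S),\, z\bigr) \;-\; \ell\bigl(A(S^{\setminus i}),\, z\bigr) \,\Bigr| \;\le\; \beta .
```

The supremum over datasets and removed points is what makes the notion distribution-independent; the locally elastic notion introduced in the talk replaces this worst case with a distribution-dependent quantity while retaining exponential generalization bounds.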

\"Zhun<\/figure>\n\n\n\n

Bio:<\/strong> Zhun is a postdoctoral researcher with Toniann Pitassi (opens in new tab)<\/span><\/a> and Richard Zemel (opens in new tab)<\/span><\/a> at Columbia University, and also part of Simons Collaboration on the Theory of Algorithmic Fairness (opens in new tab)<\/span><\/a>. Previously, Zhun got his Ph.D. in Computer Science at Harvard University, advised by Cynthia Dwork (opens in new tab)<\/span><\/a>. His research interests lie at the intersection of theoretical computer science, machine learning, and social science. His work aims to make data science more trustworthy, statistically rigorous, and aligned with societal values. Here is the website: https:\/\/www.zhundeng.org (opens in new tab)<\/span><\/a>.<\/p>\n\n\n\n

Video (opens in new tab)<\/span><\/a><\/p>\n\n\n\n\n\n

<\/p>\n\n\n\n

08\/04\/2022:<\/strong> Toward Understanding Self-Supervised Pre-training, Tengyu Ma<\/p>\n\n\n\n\n\n

Abstract:  <\/strong>AI is undergoing a paradigm shift with the rise of models that are pretrained with self-supervised learning and then adapted to a wide range of downstream tasks. Despite the unprecedented empirical success, why and how pretrained models work still largely remains a mystery. This talk will discuss recent works on analyzing contrastive learning, a family of popular self-supervised pretraining methods that learn visual representations\/embeddings of images from unlabeled data. We will develop a framework that views contrastive learning as a parametric version of spectral clustering on a so-called population positive-pair graph. We will also analyze the adaptability of the representations and provide sample complexity bounds. Finally, I will briefly discuss two follow-up works that study self-supervised representations\u2019 performance under imbalanced pretraining datasets and for shifting test distributions.  Joint works with Jeff Z. Haochen, Colin Wei, Kendrick Shen, Robbie Jones, Ananya Kumar, Sang Michael Xie, Adrien Gaidon, and Percy Liang.<\/p>\n\n\n\n
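As a concrete anchor for the "parametric spectral clustering" viewpoint, the spectral contrastive loss studied in this line of work can be written schematically as below, where f is the representation map, (x, x^+) is a positive pair, and x, x' are independent samples; the notation is illustrative rather than the talk's exact formulation.

```latex
\mathcal{L}(f) \;=\; -2\,\mathbb{E}_{(x,\,x^{+})}\!\left[ f(x)^{\top} f(x^{+}) \right]
\;+\; \mathbb{E}_{x,\,x'}\!\left[ \bigl( f(x)^{\top} f(x') \bigr)^{2} \right] .
```

Roughly, minimizers of this loss correspond, up to a linear transformation, to the top eigendirections of the population positive-pair graph, which is the sense in which contrastive learning acts as a parametric form of spectral clustering.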

\"Tengyu<\/figure>\n\n\n\n

Bio:<\/strong> Tengyu Ma is an assistant professor of Computer Science and Statistics at Stanford University. He received his Ph.D. from Princeton University and B.E. from Tsinghua University. His research interests include topics in machine learning and algorithms, such as deep learning and its theory, non-convex optimization, deep reinforcement learning, representation learning, and high-dimensional statistics. He is a recipient of the ACM Doctoral Dissertation Award Honorable Mention, the Sloan Fellowship, and NSF CAREER Award.<\/p>\n\n\n\n

Video (opens in new tab)<\/span><\/a> | Slides (opens in new tab)<\/span><\/a><\/p>\n\n\n\n\n\n

<\/p>\n\n\n\n

07\/22\/2022:<\/strong> Unveiling Transformers with LEGO, Sebastien Bubeck<\/p>\n\n\n\n\n\n

Abstract:<\/strong><\/p>\n\n\n\n

The discovery of the transformer architecture was a paradigm-shifting event for deep learning. However, these architectures are arguably even harder to understand than, say, convolutional neural networks. In this work we propose a synthetic task, called LEGO, to probe the inner workings of transformers. We obtain some insights on multi-head attention, the effect of pretraining, as well as overfitting issues. Joint work with Yi Zhang, Arturs Backurs, Ronen Eldan, Suriya Gunasekar, and Tal Wagner.<\/p>\n\n\n\n

\"a<\/figure>\n\n\n\n

Bio: <\/strong>Sebastien Bubeck manages the Machine Learning Foundations team in MSR Redmond. He has worked on multi-armed bandits, convex optimization, online algorithms, and adversarial examples, winning best paper awards at COLT (2009 and 2016), ALT (2018), and NeurIPS (2018 and 2021). At the moment he is trying to understand Transformers.<\/p>\n\n\n\n

Video (opens in new tab)<\/span><\/a><\/p>\n\n\n\n\n","protected":false},"excerpt":{"rendered":"

MSR Asia Theory Lecture Series is a forum where we invite researchers around the world to share the latest theoretical advances in big data, artificial intelligence, and related areas. The Lecture series are broadcast live over Teams. If you would like to receive the information about the upcoming talks, please send email \u201cSubscribe to the […]<\/p>\n","protected":false},"featured_media":869946,"template":"","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"msr_startdate":"2022-07-22","msr_enddate":"2024-01-24","msr_location":"Virtual \/ MSRA","msr_expirationdate":"","msr_event_recording_link":"","msr_event_link":"","msr_event_link_redirect":false,"msr_event_time":"","msr_hide_region":false,"msr_private_event":false,"footnotes":""},"research-area":[13556,13546],"msr-region":[],"msr-event-type":[197944,210063],"msr-video-type":[],"msr-locale":[268875],"msr-program-audience":[],"msr-post-option":[],"msr-impact-theme":[],"class_list":["post-867756","msr-event","type-msr-event","status-publish","has-post-thumbnail","hentry","msr-research-area-artificial-intelligence","msr-research-area-computational-sciences-mathematics","msr-event-type-hosted-by-microsoft","msr-event-type-workshop","msr-locale-en_us"],"msr_about":"\n\n\n\n\n

MSR Asia Theory Lecture Series is a forum where we invite researchers around the world to share the latest theoretical advances in big data, artificial intelligence, and related areas. The Lecture series are broadcast live over Teams. If you would like to receive the information about the upcoming talks, please send email \u201cSubscribe to the Lecture Series\u201d to MSRA.TheoryCenter@outlook.com<\/em><\/a> or subscribe to Wechat official account \"\u5fae\u8f6f\u5b66\u672f\u5408\u4f5c\u201c. <\/p>\n\n\n\n

Lectures<\/h2>\n\n\n\n

8\/21\/2024:<\/strong> Regularization and Optimal Multiclass Learning, Shang-Hua Teng<\/p>\n\n\n\n\n\n

Abstract: <\/strong>The quintessential learning algorithm of empirical risk minimization (ERM) is known to fail in various settings for which uniform convergence does not characterize learning. Relatedly, the practice of machine learning is rife with considerably richer algorithmic techniques, perhaps the most notable of which is regularization. Nevertheless, no such technique or principle has broken away from the pack to characterize optimal learning in these more general settings. The purpose of this work is to precisely characterize the role of regularization in perhaps the simplest setting for which ERM fails: multiclass learning with arbitrary label sets. Using one-inclusion graphs (OIGs), we exhibit optimal learning algorithms that dovetail with tried-and-true algorithmic principles: Occam\u2019s Razor as embodied by structural risk minimization (SRM), the principle of maximum entropy, and Bayesian inference. We also extract from OIGs a combinatorial sequence we term the Hall complexity, which is the first to characterize a problem\u2019s transductive error rate exactly. Lastly, we introduce a generalization of OIGs and the transductive learning setting to the agnostic case, where we show that optimal orientations of Hamming graphs \u2013 judged using nodes\u2019 outdegrees minus a system of node-dependent credits \u2013 characterize optimal learners exactly. We demonstrate that an agnostic version of the Hall complexity again characterizes error rates exactly, and exhibit an optimal learner using maximum entropy programs.<\/p>\n\n\n\n

\"Shang-Hua<\/figure>\n\n\n\n

Bio:<\/strong> Shang-Hua Teng is a University Professor and Seely G. Mudd Professor of Computer Science and Mathematics at USC. He is a fellow of SIAM, ACM, and Alfred P. Sloan Foundation, and has twice won the G\u00f6del Prize, first in 2008, for developing smoothed analysis, and then in 2015, for designing the breakthrough scalable Laplacian solver. Citing him as, \u201cone of the most original theoretical computer scientists in the world\u201d, the Simons Foundation named him a 2014 Simons Investigator to pursue long-term curiosity-driven fundamental research. He also received the 2009 Fulkerson Prize,\u00a0 2023 Science & Technology Award for Overseas Chinese from the China Computer Federation, 2022 ACM SIGecom Test of Time Award (for settling the complexity of computing a Nash equilibrium), 2021 ACM STOC Test of Time Award (for smoothed analysis), 2020 Phi Kappa Phi Faculty Recognition Award (2020)\u00a0 for his book Scalable Algorithms for Data and Network Analysis, 2011 ACM STOC Best Paper Award (for improving maximum-flow minimum-cut algorithms). In addition, he and collaborators developed the first optimal well-shaped Delaunay mesh generation algorithms for arbitrary three-dimensional domains, settled the Rousseeuw-Hubert regression-depth conjecture in robust statistics, and resolved two long-standing complexity-theoretical questions regarding the Sprague-Grundy theorem in combinatorial game theory. For his industry work with Xerox, NASA, Intel, IBM, Akamai, and Microsoft, he received fifteen patents in areas including compiler optimization, Internet technology, and social networks. Dedicated to teaching his daughter to speak Chinese as the sole Chinese-speaking parent in an otherwise English-speaking family and environment, he has also become fascinated with children's bilingual learning.<\/p>\n\n\n\n\n\n

4\/25\/2024:<\/strong> Toward Demystifying Grokking, Wei Hu<\/p>\n\n\n\n\n\n

Abstract: <\/strong>Grokking is a surprising phenomenon in which a neural network first memorizes the training set, resulting in perfect training accuracy but near-random test accuracy, and after training for sufficiently longer, it suddenly transitions to perfect test accuracy. I will talk about our recent work toward theoretically explaining the grokking phenomenon. First, we show that a dichotomy of early and late-phase implicit biases can provably induce grokking, and exemplify it in simplified settings such as sparse linear models and matrix completion. Second, we show that in a simple non-linear classification task, grokking also provably occurs and coincides with another intriguing phenomenon known as benign overfitting. <\/p>\n\n\n\n

\"Weihu\"<\/figure>\n\n\n\n

Bio:<\/strong> Wei Hu is an Assistant Professor in Computer Science and Engineering at the University of Michigan. He obtained his Ph.D. degree from Princeton University, where he was advised by Sanjeev Arora, and his Bachelor's degree from Tsinghua University, where he was a member of Yao Class. His research interest is in the theoretical and scientific foundations of deep learning. He is a recipient of the Google Research Scholar award and the Siebel Scholarship.<\/p>\n\n\n\n

Slides<\/a><\/p>\n\n\n\n\n\n

2\/28\/2024:<\/strong> Feature learning of neural network by mean field Langevin dynamics: Optimization and generalization, Taiji Suzuki<\/p>\n\n\n\n\n\n

Abstract: <\/strong>In this talk, I will discuss the feature learning ability of neural networks from statistical and optimization perspectives. In particular, I will present recent developments of theory of the mean-field Langevin dynamics (MFLD) and its application to neural network training. MFLD is a nonlinear generalization of the gradient Langevin dynamics (GLD) that minimizes an entropy regularized convex function defined on the space of probability distributions, and it naturally arises from the optimization of two-layer neural networks via (noisy) gradient descent. In the first half, I will present the convergence result of MFLD and explain how the convergence of MFLD is connected to the duality gap through the log-Sobolev inequality of the so-called proximal Gibbs measure. In addition to that, the time-space discretization of MFLD will be addressed. It can be shown that the discretization error can be bounded uniformly in time unlike existing work. In the latter half, I will discuss the generalization error analysis of neural networks trained by MFLD. Addressing a binary classification problem, we have a general form of a test classification error bound that provides a fast learning rate based on a local Rademacher complexity analysis. By applying this general framework to the k-sparse parity problem, we demonstrate how the feature learning helps its sample complexity compared with the kernel methods. Finally, we also discuss how anisotropic structure of input will affect the sample complexity and computational complexity. If the data is well aligned to the target function, both sample and computational complexities are significantly mitigated.<\/p>\n\n\n\n

\"Photo<\/figure>\n\n\n\n

Bio:<\/strong> Taiji Suzuki is currently an Associate Professor in the Department of Mathematical Informatics at the University of Tokyo. He also serves as the team leader of \u201cDeep learning theory\u201d team in AIP-RIKEN. He received his Ph.D. degree in information science and technology from the University of Tokyo in 2009. He worked as an assistant professor in the department of mathematical informatics, the University of Tokyo between 2009 and 2013, and then he was an associate professor in the department of mathematical and computing science, Tokyo Institute of Technology between 2013 and 2017. He served as area chairs of premier conferences such as NeurIPS, ICML, ICLR and AISTATS, a program chair of ACML2019, and an action editor of the Annals of Statistics. He received the Outstanding Paper Award at ICLR in 2021, the MEXT Young Scientists\u2019 Prize, and Outstanding Achievement Award in 2017 from the Japan Statistical Society. He is interested in deep learning theory, nonparametric statistics, high dimensional statistics, and stochastic optimization. In particular, he is mainly working on deep learning theory from several aspects such as representation ability, generalization ability and optimization ability. He also has devoted stochastic optimization to accelerate large scale machine learning problems including variance reduction methods, Nesterov\u2019s acceleration, federated learning and non-convex noisy optimization.<\/p>\n\n\n\n\n\n

1\/25\/2024:<\/strong> Recent Advances in Coresets for Clustering, Shaofeng Jiang<\/p>\n\n\n\n\n\n

Abstract: <\/strong>Coreset is a popular data reduction technique. Roughly, a coreset is a tiny proxy of the dataset, such that the objective function evaluated on the coreset for every feasible solution approximates that on the original dataset. Coresets are particularly useful for dealing with big data since they can usually be constructed in sublinear models efficiently, including streaming and parallel computing.<\/p>\n\n\n\n

The study of coresets for clustering is very fruitful, and nearly tight bounds have recently been obtained for well-known problems such as k-median and k-means and their variants. In this talk, I will introduce the recent advances in coresets for clustering, with a focus on presenting several fundamental sampling techniques, including importance sampling and hierarchical uniform sampling, for the construction of coresets. I will conclude the talk by discussing future directions for the study of coreset (and beyond).<\/p>\n\n\n\n

\"a<\/figure>\n\n\n\n

Bio:<\/strong> Shaofeng Jiang is an assistant professor at Peking University. He obtained his PhD at the University of Hong Kong, and before he joined PKU, he worked as a postdoctoral researcher at the Weizmann Institute of Science, and an assistant professor at Aalto University. His research interest generally lies in theoretical computer science, with a focus on sublinear algorithms.<\/p>\n\n\n\n

Slides<\/a><\/p>\n\n\n\n\n\n

11\/28\/2023:<\/strong> Textbooks Are All You Need, Yin Tat Lee<\/p>\n\n\n\n\n\n

Abstract: <\/strong>Many believed that training large language models (LLMs) required using a vast dataset and an immense number of parameters. This is computationally demanding, requiring significant GPU resources. GPT-4 exemplified this belief, being a colossal model trained on a vast corpus.<\/p>\n\n\n\n

In light of this, we sought to discern if such impressive results could be achieved with smaller models and limited data for code generation. We demonstrate that with high-quality data, the demand for expansive datasets and a multitude of parameters lessens. The outcome was a few billion-size model, which not only met or exceeded the performance of existing open-source models but did so utilizing a mere 1\/1000th of compute in training. Moreover, we will discuss specific emergent properties observed in the model after its fine-tuning on coding exercises.<\/p>\n\n\n\n

\"a<\/figure>\n\n\n\n

Bio:<\/strong> Yin Tat Lee is a Principal Researcher at MSR and an Associate Professor of Paul G. Allen School of Computer Science & Engineering at the University of Washington. His research interests are convex optimization, convex geometry, graph algorithms, online algorithms, and differential privacy. During his career, he has received a variety of awards, including Best Paper Awards at FOCS, SODA and NeurIPS, Sprowls Award, NSF CAREER Award, A.W. Tucker Prize, Microsoft Research Faculty Fellowship, Sloan Research Fellowship, and Packard Fellowships.<\/p>\n\n\n\n\n\n

10\/30\/2023:<\/strong> Intelligent Heuristics Are the Future of Computing, Shang-Hua Teng<\/p>\n\n\n\n\n\n

Abstract: <\/strong>Back in 1988, the partial game trees explored by computer chess programs were among the largest search structures in real-world computing. Because the game tree is too large to be fully evaluated, chess programs must make heuristic strategic decisions based on partial information, making it an illustrative subject for teaching AI search. In one of his lectures that year on AI search for games and puzzles, Professor Hans Berliner \u2014 a pioneer of computer chess programs \u2014 stated: \u201cIntelligent heuristics are the future of computing.\u201d<\/p>\n\n\n\n

As a student in the field of the theory of computation, I was naturally perplexed but fascinated by this perspective. I had been trained to believe that \u201cAlgorithms and computational complexity theory are the foundation of computer science.\u201d However, as it happens, my attempts to understand heuristics in computing have subsequently played a significant role in my career as a theoretical computer scientist. I have come to realize that Berliner\u2019s postulation is a far-reaching worldview, particularly in the age of big, rich, complex, and multifaceted data and models, when computing has ubiquitous interactions with science, engineering, humanity, and society.  <\/p>\n\n\n\n

In this talk, I will share some of my experiences on the subject of heuristics in computing, presenting examples of theoretical attempts to understand the behavior of heuristics on real data, as well as efforts to design practical heuristics with desirable theoretical characterizations. My hope is that these theoretical insights from past heuristics \u2014 such as spectral partitioning, multilevel methods, evolutionary algorithms, and simplex methods \u2014 can shed light on and further inspire a deeper understanding of the current and future techniques in AI and data mining.<\/p>\n\n\n\n

\"Shang-Hua<\/figure>\n\n\n\n

Bio:<\/strong>  Shang-Hua Teng is a University Professor and Seely G. Mudd Professor of Computer Science and Mathematics at USC. He is a fellow of SIAM, ACM, and Alfred P. Sloan Foundation, and has twice won the G\u00f6del Prize, first in 2008, for developing smoothed analysis, and then in 2015, for designing the breakthrough scalable Laplacian solver. Citing him as, \u201cone of the most original theoretical computer scientists in the world\u201d, the Simons Foundation named him a 2014 Simons Investigator to pursue long-term curiosity-driven fundamental research. He also received the 2009 Fulkerson Prize, 2021 ACM STOC Test of Time Award (for smoothed analysis), 2022 ACM SIGecom Test of Time Award (for settling the complexity of computing a Nash equilibrium), 2011 ACM STOC Best Paper Award (for improving maximum-flow minimum-cut algorithms), and 2023 Science & Technology Award for Overseas Chinese<\/a> from the China Computer Federation. In addition, he and collaborators developed the first optimal well-shaped Delaunay mesh generation algorithms for arbitrary three-dimensional domains, settled the Rousseeuw-Hubert regression-depth conjecture in robust statistics, and resolved two long-standing complexity-theoretical questions regarding the Sprague-Grundy theorem in combinatorial game theory. For his industry work with Xerox, NASA, Intel, IBM, Akamai, and Microsoft, he received fifteen patents in areas including compiler optimization, Internet technology, and social networks. Dedicated to teaching his daughter to speak Chinese as the sole Chinese-speaking parent in an otherwise English-speaking family and environment, he has also become fascinated with children's bilingual learning.<\/p>\n\n\n\n\n\n

10\/23\/2023:<\/strong> The mathematics of complex streamed data, Terry Lyons<\/p>\n\n\n\n\n\n

Abstract:<\/strong> Complex streams of evolving data are better understood by their effects on nonlinear systems than by their values at particular times. The question of which nonlinear systems to use would seem to be context dependent, but it is not. Core to rough path theory is a simple universal nonlinear system that captures all the information needed to predict the response of any nonlinear system. This idealized mathematical feature set is known as the signature of the stream. Its abstract simplicity opens the possibility of understanding and working with streams in the same context-free way that calculators work with numbers. Signature-based techniques offer simple-to-apply, universal numerical methods that are robust to irregular data and efficient at representing the order of events and complex oscillatory data. Specific software can be developed once and then applied across many contexts. Signatures underpin prize-winning contributions in recognizing Chinese handwriting, in detecting sepsis, in generating financial data, and most recently in the ability to score streams as outliers against a corpus of normal streams. This principled outlier technology has emerged as a powerful unifying technique; it identifies radio frequency interference in astronomical data, brain injury from MEG data, and more. The underpinning theoretical contributions span a range from abstract algebra and non-commutative analysis to questions of organizing efficient numerical calculation. See www.datasig.ac.uk\/<\/a>. New hyperbolic partial differential equations have been developed that compute the \u201csignature kernel\u201d trick without ever having to introduce signatures. Neural controlled differential equations can directly harness approaches such as the log-ODE method and consume the control as a rough path.<\/p>\n\n\n\n
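As a concrete illustration (a toy sketch of the mathematics, not the API of packages such as RoughPy), the first two levels of the signature of a piecewise-linear stream can be assembled from its increments via Chen's relation:<\/p>\n\n\n\n

<pre><code>
import numpy as np

def signature_levels_1_2(path):
    # path: array of shape (T, d), a stream sampled at T ticks
    increments = np.diff(path, axis=0)
    d = path.shape[1]
    level1 = np.zeros(d)
    level2 = np.zeros((d, d))
    for delta in increments:
        # Chen's relation: appending one linear segment updates level 2 by
        # (running level-1 term) outer delta, plus the segment's own 0.5 * delta outer delta
        level2 += np.outer(level1, delta) + 0.5 * np.outer(delta, delta)
        level1 += delta
    return level1, level2
</code><\/pre>\n\n\n\n

Level 1 is simply the total increment of the stream; level 2 already records information about the order of events, which is what makes the signature sensitive to oscillation and sequencing rather than only to values.<\/p>\n\n\n\n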

\"a<\/figure>\n\n\n\n

Bio:<\/strong> <\/p>\n\n\n\n

Professor Terry Lyons is Wallis Professor Emeritus and Professor of Mathematics at the University of Oxford. He is currently PI of the DataSig program (primarily funded by EPSRC), and of the complementary research programme CIMDA-Oxford (under the support of InnoHK and the HKSAR). He was a founding member (2007) of, and then Director (2011-2015) of, the Oxford Man Institute of Quantitative Finance; he was the Director of the Wales Institute of Mathematical and Computational Sciences (WIMCS; 2008-2011). He came to Oxford in 2000, having previously been Professor of Mathematics at Imperial College London (1993-2000), and before that he held the Colin Maclaurin Chair at Edinburgh (1985-93). He was President of the London Mathematical Society (2013-15).<\/p>\n\n\n\n

Professor Lyons\u2019s long-term research interests are focused on the mathematics of streamed data and on building strong applications from these mathematical insights. His current goal is to use rough path theory to develop innovative and truly generic tools for working with streamed data, and to make these widely accessible through the Python package RoughPy. One example of this synergy is the signature of a stream and the prize-winning applications described in the abstract above. See www.datasig.ac.uk\/<\/a><\/p>\n\n\n\n\n\n

10\/18\/2023:<\/strong> Is RLHF More Difficult than Standard RL?, Chi Jin<\/p>\n\n\n\n\n\n

Abstract:<\/strong> Reinforcement learning from Human Feedback (RLHF) learns from preference signals, while standard Reinforcement Learning (RL) directly learns from reward signals. Preferences arguably contain less information than rewards, which makes preference-based RL seemingly more difficult. This work theoretically proves that, for a wide range of preference models, we can solve preference-based RL directly using existing algorithms and techniques for reward-based RL, with small or no extra costs. Specifically, (1) for preferences that are drawn from reward-based probabilistic models, we reduce the problem to robust reward-based RL that can tolerate small errors in rewards; (2) for general arbitrary preferences where the objective is to find the von Neumann winner, we reduce the problem to multiagent reward-based RL which finds Nash equilibria for factored Markov games under a restricted set of policies. The latter case can be further reduced to adversarial MDP when preferences only depend on the final state. We instantiate all reward-based RL subroutines by concrete provable algorithms and apply our theory to a large class of models including tabular MDPs and MDPs with generic function approximation. We further provide guarantees when K-wise comparisons are available.<\/p>\n\n\n\n
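For intuition, one common instance of a reward-based probabilistic preference model is the Bradley-Terry model (our illustrative choice; the talk covers a broader class). Under it, each pairwise comparison is a noisy observation of the reward gap, which is what makes the reduction to (robust) reward-based RL possible:<\/p>\n\n\n\n

<pre><code>
import numpy as np

def bradley_terry_preference(reward_a, reward_b):
    # P(trajectory a is preferred to trajectory b) = sigmoid(r(a) - r(b))
    return 1.0 / (1.0 + np.exp(-(reward_a - reward_b)))
</code><\/pre>\n\n\n\n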

\"a<\/figure>\n\n\n\n

Bio:<\/strong> Chi Jin is an assistant professor in the Electrical and Computer Engineering department of Princeton University. He obtained his PhD degree in Computer Science at the University of California, Berkeley, advised by Michael I. Jordan. His research mainly focuses on theoretical machine learning, with special emphasis on nonconvex optimization and Reinforcement Learning (RL). In nonconvex optimization, he provided the first proof showing that a first-order algorithm (stochastic gradient descent) is capable of escaping saddle points efficiently. In RL, he provided the first efficient learning guarantees for Q-learning and least-squares value iteration algorithms when exploration is necessary. His works also lay the theoretical foundation for RL with function approximation, multiagent RL, and partially observable RL.<\/p>\n\n\n\n

Video<\/a><\/p>\n\n\n\n\n\n

8\/24\/2023:<\/strong> On the Power of Foundation Models, Yang Yuan<\/p>\n\n\n\n\n\n

Abstract:<\/strong> With infinitely many high-quality data points, infinite computational power, an infinitely large foundation model with a perfect training algorithm and guaranteed zero generalization error on the pretext task, can the model be used for everything? This question cannot be answered by the existing theory of representation, optimization or generalization, because the issues they mainly investigate are assumed to be nonexistent here. In this paper, we show that category theory provides powerful machinery to answer this question. We prove three results. The first one limits the power of prompt-based learning, saying that the model can solve a downstream task with prompts if and only if the task is representable. The second one says that fine-tuning does not have this limit, as a foundation model with the minimum required power (up to symmetry) can theoretically solve downstream tasks for the category defined by the pretext task, with fine-tuning and enough resources. Our final result can be seen as a new type of generalization theorem, showing that the foundation model can generate unseen objects from the target category (e.g., images) using the structural information from the source category (e.g., texts). Along the way, we provide a categorical framework for supervised and self-supervised learning, which might be of independent interest.<\/p>\n\n\n\n

\"a<\/figure>\n\n\n\n

Bio:<\/strong> Yang Yuan is now an assistant professor at IIIS, Tsinghua. He finished his undergraduate study at Peking University in 2012. Afterwards, he received his PhD at Cornell University in 2018, advised by Professor Robert Kleinberg. Before joining Tsinghua, he spent one year at MIT Institute for Foundations of Data Science (MIFODS) as a postdoc researcher. He works on AI+Healthcare, AI Theory and Applied Category Theory.<\/p>\n\n\n\n

Video<\/a><\/p>\n\n\n\n\n\n

4\/27\/2023:<\/strong> Understanding Adam and AdamW through proximal updates, scale-freeness, and relaxed smoothness, Francesco Orabona<\/p>\n\n\n\n\n\n

Abstract:<\/strong> Adam and AdamW are the most commonly used algorithms for training deep neural networks due to their remarkable performance. However, despite a massive amount of research, it is fair to say that we are still far from understanding the true reasons why they work so well. In this talk, I'll show you some recent results on unique characteristics of Adam and AdamW.
First, I'll show how AdamW can be easily understood as an approximation of a proximal update on the squared L2 regularizer. Next, I'll show that, contrary to Adam, AdamW's update is "scale-free", i.e., its update is invariant to component-wise rescaling of the gradients. I'll show how scale-freeness provides an automatic preconditioning and how it correlates with the better performance of AdamW over Adam in deep learning experiments. Finally, I'll present the first analysis of a (minor) variant of Adam that has a provable advantage over SGD for functions satisfying a relaxed smoothness assumption, such as the objective functions of Transformers.<\/p>\n\n\n\n
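As a reference point for the discussion, here is a minimal sketch of a single step of standard AdamW with decoupled weight decay (the talk's provable variant of Adam differs in its details). Up to the small epsilon term, the step is scale-free: rescaling the gradients component-wise by positive factors rescales the first moment and the square root of the second moment by the same factors, leaving the update essentially unchanged, whereas folding an L2 penalty into the gradient (Adam with L2 regularization) breaks this property.<\/p>\n\n\n\n

<pre><code>
import numpy as np

def adamw_step(w, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8, wd=1e-2):
    # w: parameters, g: gradient, (m, v): first and second moment estimates, t: step count (>= 1)
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g * g
    m_hat = m / (1 - b1 ** t)            # bias correction
    v_hat = v / (1 - b2 ** t)
    # decoupled weight decay: applied to w directly, never entering m or v
    w = w - lr * (m_hat / (np.sqrt(v_hat) + eps) + wd * w)
    return w, m, v
</code><\/pre>\n\n\n\n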

\"Francesco<\/figure>\n\n\n\n

Bio:<\/strong> Francesco Orabona is an Associate Professor of Electrical & Computer Engineering at Boston University. His research interests lie in online learning, optimization, and statistical learning theory. He obtained his Ph.D. from the University of Genova in 2007. He previously was an Assistant Professor of Computer Science at Stony Brook University, a Senior Research Scientist at Yahoo Labs, and a Research Assistant Professor at the Toyota Technological Institute at Chicago. He received a Faculty Early Career Development (CAREER) Award from the NSF in 2021 and a Google Research Award in 2017.<\/p>\n\n\n\n

Video<\/a><\/p>\n\n\n\n\n\n

3\/10\/2023:<\/strong> Modeling Multiagent Game Dynamics: Approaches to Equilibrium Computation and Incentive Analysis, Xiaotie Deng<\/p>\n\n\n\n\n\n

Abstract: <\/strong>This talk explores various research approaches to modeling the computation of equilibria and the analysis of incentives in game dynamics. We discuss computational complexity, sequential and interactive optimization, and equilibrium analysis in multiagent systems.<\/p>\n\n\n\n

\"a<\/figure>\n\n\n\n

Bio:<\/strong> Xiaotie Deng is a Chair Professor at Peking University with a Ph.D. from Stanford University. His research focuses on algorithmic game theory, particularly in the context of the Internet and Blockchain Economics. Deng has taught at several universities and is a fellow of the ACM, IEEE, and CSIAM. He is a foreign member of Academia Europaea and received the 2022 Test of Time Award from ACM SIGecom.<\/p>\n\n\n\n

Video<\/a><\/p>\n\n\n\n\n\n

1\/12\/2023: <\/strong>Passive and Active Multi-Task Representation Learning, Simon (Shaolei) Du<\/p>\n\n\n\n\n\n

Abstract: <\/strong>Representation learning has been widely used in many applications. In this talk, I will present our work, which uncovers, from a statistical learning point of view, when and why representation learning provably improves sample efficiency. Furthermore, I will talk about how to actively select the most relevant task to boost performance.<\/p>\n\n\n\n

\"Simon<\/figure>\n\n\n\n

Bio:<\/strong> Simon Shaolei Du is an assistant professor in the Paul G. Allen School of Computer Science & Engineering at the University of Washington. His research interests are broadly in machine learning, such as deep learning, representation learning, and reinforcement learning. Prior to starting as faculty, he was a postdoc at the Institute for Advanced Study of Princeton. He completed his Ph.D. in Machine Learning at Carnegie Mellon University. Simon's research has been recognized by a Samsung AI Researcher of the Year Award, an NSF CAREER Award, an Nvidia Pioneer Award, an AAAI New Faculty Highlights award, and a Distinguished Dissertation Award honorable mention from CMU.<\/p>\n\n\n\n

Video<\/a><\/p>\n\n\n\n

<\/p>\n\n\n\n\n\n

12\/22\/2022:<\/strong> Reward-free Reinforcement Learning via Sample-Efficient Representation Learning, Yingbin Liang<\/p>\n\n\n\n\n\n

Abstract:<\/strong> As reward-free reinforcement learning (RL) becomes a powerful framework for a variety of multi-objective applications, representation learning arises as an effective technique to deal with the curse of dimensionality in reward-free RL. However, the existing representation learning algorithms for reward-free RL still suffer from high sample complexity, although they are polynomially efficient. In this talk, I will first present a novel representation learning algorithm that we propose for reward-free RL. We show that such an algorithm provably finds a near-optimal policy as well as attaining near-accurate system identification via reward-free exploration, with significantly improved sample complexity compared to the best previously known result. I will then present our characterization of the benefit of representation learning in reward-free multitask (a.k.a. meta) RL, as well as the benefit of employing the learned representation from upstream in downstream tasks. I will conclude my talk with remarks on future directions. The work to be presented was joint with Yuan Cheng (USTC), Ruiquan Huang (PSU), Dr. Songtao Feng (OSU), Prof. Jing Yang (PSU), and Prof. Hong Zhang (USTC).<\/p>\n\n\n\n

\"a<\/figure>\n\n\n\n

Bio:<\/strong> Dr. Yingbin Liang is currently a Professor in the Department of Electrical and Computer Engineering at the Ohio State University (OSU), and a core faculty member of the Ohio State Translational Data Analytics Institute (TDAI). She also serves as the Deputy Director of the AI-Edge Institute at OSU. Dr. Liang received her Ph.D. degree in Electrical Engineering from the University of Illinois at Urbana-Champaign in 2005, and served on the faculty of the University of Hawaii and Syracuse University before joining OSU. Dr. Liang's research interests include machine learning, optimization, information theory, and statistical signal processing. Dr. Liang received the National Science Foundation CAREER Award and the State of Hawaii Governor Innovation Award in 2009. She also received the EURASIP Best Paper Award in 2014.<\/p>\n\n\n\n

Video<\/a><\/p>\n\n\n\n\n\n

<\/p>\n\n\n\n

11\/04\/2022:<\/strong> Player-optimal Stable Regret for Bandit Learning in Matching Markets, Shuai Li<\/p>\n\n\n\n\n\n

Abstract: <\/strong>The problem of matching markets has a long history of study in the literature due to its wide range of applications. Finding a stable matching is a common equilibrium objective in this problem. Since market participants are usually uncertain of their preferences, a rich line of recent works studies the online setting where participants on one side (players) learn their unknown preferences from iterative interactions with the other side (arms). Most previous works in this line are only able to derive theoretical guarantees for player-pessimal stable regret, which is defined with respect to the players\u2019 least-preferred stable matching.<\/p>\n\n\n\n

However, under the pessimal stable matching, players only obtain the least reward among all stable matchings. To maximize players\u2019 profits, the player-optimal stable matching would be the most desirable. Though Basu et al. [2021] successfully provide an upper bound for player-optimal stable regret, their result can be exponentially large if the players\u2019 preference gap is small. Whether a polynomial guarantee for this regret exists is a significant but still open problem. In this work, we provide a new algorithm and show that the optimal stable regret of each player can be upper bounded by O(K log T \/ \u2206^2), where K is the number of arms, T is the horizon and \u2206 is the players\u2019 minimum preference gap. This result significantly improves over previous works, which either consider the weaker player-pessimal stable matching objective or apply only to markets with special assumptions. When the preferences of participants satisfy some special conditions, our regret upper bound also matches the previously derived lower bound. This work was accepted to SODA 2023.<\/p>\n\n\n\n

\"Shuai<\/figure>\n\n\n\n

Bio:<\/strong> Shuai Li is currently an Assistant Professor in the John Hopcroft Center of Shanghai Jiao Tong University. She received her PhD degree in Computer Science from the Chinese University of Hong Kong, her master\u2019s degree in Mathematics from the University of the Chinese Academy of Sciences, and her bachelor\u2019s degree in Mathematics from Zhejiang University. Her research interests include machine learning theory, bandit algorithms and reinforcement learning algorithms. She has published 40+ papers in top machine learning conferences like ICML\/NeurIPS\/AAAI\/IJCAI\/KDD and serves as a reviewer for these conferences. She is a recipient of the Shanghai Sailing Program (2020) and a Google PhD Fellowship (2018).<\/p>\n\n\n\n

<\/p>\n\n\n\n

Video<\/p>\n\n\n\n\n\n

<\/p>\n\n\n\n

10\/13\/2022:<\/strong> What Should a Good Deep Neural Network Look Like? Insights from a Layer-Peeled Model and the Law of Equi-Separation, Weijie Su<\/p>\n\n\n\n\n\n

Abstract:<\/strong> In this talk, we will investigate the emergence of geometric patterns in well-trained deep learning models by making use of a layer-peeled model and the law of equi-separation. The former is a nonconvex optimization program that models the last-layer features and weights. We use the model to shed light on the neural collapse phenomenon of Papyan, Han, and Donoho, and to predict a hitherto-unknown phenomenon that we term minority collapse in imbalanced training. This is based on joint work with Cong Fang, Hangfeng He, and Qi Long.<\/p>\n\n\n\n

The law of equi-separation is a pervasive empirical phenomenon describing how data are separated according to their class membership from the bottom to the top layer of a well-trained neural network. Through extensive computational experiments, we will show that neural networks improve data separation through layers in a simple exponential manner: each layer improves the separation by a roughly constant multiplicative factor, thereby showing that all layers are created equal. We will conclude the talk by discussing the implications of this law for the interpretation, robustness, and generalization of deep learning, as well as for the inadequacy of some existing approaches toward demystifying deep learning. This is based on joint work with Hangfeng He.<\/p>\n\n\n\n
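In our own notation (not necessarily the paper's), one way to write the law is the following: if D_l denotes a separation fuzziness measure of the features at layer l (for instance, a within-class to between-class variability ratio), then<\/p>\n\n\n\n

<pre><code>
D_l \approx \rho^{\,l} D_0 \quad \text{for some } 0 < \rho < 1,
\qquad \text{equivalently} \qquad
\log D_l \approx l \log \rho + \log D_0 ,
</code><\/pre>\n\n\n\n

so each layer shrinks the fuzziness by a roughly constant factor, and the log-fuzziness decays linearly with depth.<\/p>\n\n\n\n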

\"a<\/figure>\n\n\n\n

Bio:<\/strong> Weijie Su is an Associate Professor in the Wharton Statistics and Data Science Department and, by courtesy, in the Department of Computer and Information Science, at the University of Pennsylvania. He is a co-director of Penn Research in Machine Learning. Prior to joining Penn, he received his Ph.D. from Stanford University in 2016 and his bachelor\u2019s degree from Peking University in 2011. His research interests span privacy-preserving data analysis, deep learning theory, optimization, high-dimensional statistics, and mechanism design. He is a recipient of the Stanford Theodore Anderson Dissertation Award in 2016, an NSF CAREER Award in 2019, an Alfred Sloan Research Fellowship in 2020, the SIAM Early Career Prize in Data Science in 2022, and the IMS Peter Gavin Hall Prize in 2022.<\/p>\n\n\n\n

Video<\/a><\/p>\n\n\n\n\n\n

<\/p>\n\n\n\n

09\/22\/2022:<\/strong> On the (Non)smoothness of Neural Network Training, Jingzhao Zhang<\/p>\n\n\n\n\n\n

Abstract:<\/strong> In this talk, we will discuss the following question: why is neural network training non-smooth from an optimization perspective, and how should we analyze convergence for non-smooth problems? We start by showing that non-smoothness is essential to standard neural network training procedures, and that network training converges in an unstable manner. We then provide theoretical models for understanding why optimization in neural networks is unstable, and how new definitions of convergence can reconcile theory with practice.<\/p>\n\n\n\n
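As a toy illustration of why global smoothness assumptions can fail (our own example, not the talk's model): for a simple quartic objective, the gradient is not globally Lipschitz, so no fixed step size is stable for all initializations, and gradient descent behaves very differently depending on where it starts.<\/p>\n\n\n\n

<pre><code>
def gradient_descent_quartic(x0, lr, steps=10):
    # objective f(x) = x**4 / 4 with gradient x**3; the local curvature 3 * x**2 is unbounded,
    # so no fixed step size satisfies the classical smoothness condition globally
    x = x0
    trace = [x]
    for _ in range(steps):
        x = x - lr * x ** 3
        trace.append(x)
    return trace

print(gradient_descent_quartic(x0=1.0, lr=0.1))   # shrinks steadily toward the minimum
print(gradient_descent_quartic(x0=10.0, lr=0.1))  # blows up: step too large for the local curvature
</code><\/pre>\n\n\n\n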

\"people\"<\/figure>\n\n\n\n

Bio:<\/strong> Jingzhao Zhang is an assistant professor at Tsinghua, IIIS. He graduated from MIT EECS under the supervision of Prof Ali Jadbabaie and Prof Suvrit Sra. His research focuses on providing theoretical justifications and analyses to practical large-scale optimization algorithms. He is also interested in machine learning applications, especially those involving dynamical system formulations.<\/p>\n\n\n\n

Video<\/a><\/p>\n\n\n\n\n\n

<\/p>\n\n\n\n

08\/25\/2022:<\/strong> Local Elasticity of Neural Networks and Its Inspired Theory, Zhun Deng<\/p>\n\n\n\n\n\n

Abstract: <\/strong>In this talk, I will briefly review the local elasticity of neural networks proposed by He et al. Based on that, I will introduce a new type of stability notion, which can improve over classical stability notions with respect to generalization behavior in certain situations. Specifically, among different notions of stability, uniform stability is arguably the most popular one, and it yields exponential generalization bounds. However, uniform stability only considers the worst-case loss change (or so-called sensitivity) caused by removing a single data point, which is distribution-independent and therefore undesirable. There are many cases in which the worst-case sensitivity of the loss is much larger than the average sensitivity taken over the single data point that is removed, especially in advanced models such as random feature models or neural networks. Many previous works try to mitigate this distribution-independence issue by proposing weaker notions of stability; however, they either yield only polynomial bounds or bounds that do not vanish as the sample size goes to infinity. Given that, we propose locally elastic stability as a weaker and distribution-dependent stability notion, which still yields exponential generalization bounds. We further demonstrate that locally elastic stability implies tighter generalization bounds than those derived from uniform stability in many situations, by revisiting the examples of bounded support vector machines, regularized least squares regression, and stochastic gradient descent.<\/p>\n\n\n\n
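To make the contrast concrete (our paraphrase; see the paper for the precise definitions), uniform stability asks for a single worst-case bound on the leave-one-out sensitivity, while locally elastic stability lets the bound depend on both the removed point and the point at which the loss is evaluated:<\/p>\n\n\n\n

<pre><code>
\text{Uniform stability: } \sup_{S,\, i,\, z} \bigl| \ell(A(S), z) - \ell(A(S^{\setminus i}), z) \bigr| \le \beta .
\qquad
\text{Locally elastic stability (roughly): } \bigl| \ell(A(S), z) - \ell(A(S^{\setminus i}), z) \bigr| \le \beta_m(z_i, z) .
</code><\/pre>\n\n\n\n

The resulting generalization bound can then use a distribution-dependent average of \beta_m(z_i, z) over z rather than its supremum, which is what yields tighter bounds in the examples above.<\/p>\n\n\n\n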

\"Zhun<\/figure>\n\n\n\n

Bio:<\/strong> Zhun is a postdoctoral researcher with Toniann Pitassi<\/a> and Richard Zemel<\/a> at Columbia University, and is also part of the Simons Collaboration on the Theory of Algorithmic Fairness<\/a>. Previously, Zhun received his Ph.D. in Computer Science at Harvard University, advised by Cynthia Dwork<\/a>. His research interests lie at the intersection of theoretical computer science, machine learning, and social science. His work aims to make data science more trustworthy, statistically rigorous, and aligned with societal values. Here is the website: https:\/\/www.zhundeng.org<\/a>.<\/p>\n\n\n\n

Video<\/a><\/p>\n\n\n\n\n\n

<\/p>\n\n\n\n

08\/04\/2022:<\/strong> Toward Understanding Self-Supervised Pre-training, Tengyu Ma<\/p>\n\n\n\n\n\n

Abstract: <\/strong>AI is undergoing a paradigm shift with the rise of models that are pretrained with self-supervised learning and then adapted to a wide range of downstream tasks. Despite the unprecedented empirical success, why and how pretrained models work still largely remains a mystery. This talk will discuss recent works on analyzing contrastive learning, a family of popular self-supervised pretraining methods that learn visual representations\/embeddings of images from unlabeled data. We will develop a framework that views contrastive learning as a parametric version of spectral clustering on a so-called population positive-pair graph. We will also analyze the adaptability of the representations and provide sample complexity bounds. Finally, I will briefly discuss two follow-up works that study the performance of self-supervised representations under imbalanced pretraining datasets and under shifting test distributions. These are joint works with Jeff Z. Haochen, Colin Wei, Kendrick Shen, Robbie Jones, Ananya Kumar, Sang Michael Xie, Adrien Gaidon, and Percy Liang.<\/p>\n\n\n\n
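As a minimal sketch of the spectral contrastive loss from this line of work, written over a minibatch (augmentation pipelines, normalization, and other practical details omitted): the first term pulls embeddings of positive pairs together, and the second term decorrelates embeddings of independent samples, which is what connects the objective to spectral clustering on the positive-pair graph.<\/p>\n\n\n\n

<pre><code>
import numpy as np

def spectral_contrastive_loss(f_x, f_x_pos, f_x_neg):
    # f_x, f_x_pos: (n, k) embeddings of two augmentations of the same images
    # f_x_neg:      (n, k) embeddings of independently drawn images
    pos = -2.0 * np.einsum('ik,ik->i', f_x, f_x_pos).mean()
    neg = (np.einsum('ik,jk->ij', f_x, f_x_neg) ** 2).mean()
    return pos + neg
</code><\/pre>\n\n\n\n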

\"Tengyu<\/figure>\n\n\n\n

Bio:<\/strong> Tengyu Ma is an assistant professor of Computer Science and Statistics at Stanford University. He received his Ph.D. from Princeton University and B.E. from Tsinghua University. His research interests include topics in machine learning and algorithms, such as deep learning and its theory, non-convex optimization, deep reinforcement learning, representation learning, and high-dimensional statistics. He is a recipient of the ACM Doctoral Dissertation Award Honorable Mention, the Sloan Fellowship, and NSF CAREER Award.<\/p>\n\n\n\n

Video<\/a> | Slides<\/a><\/p>\n\n\n\n\n\n

<\/p>\n\n\n\n

07\/22\/2022:<\/strong> Unveiling Transformers with LEGO, Sebastien Bubeck<\/p>\n\n\n\n\n\n

Abstract:<\/strong><\/p>\n\n\n\n

The discovery of the transformer architecture was a paradigm-shifting event for deep learning. However, these architectures are arguably even harder to understand than, say, convolutional neural networks. In this work we propose a synthetic task, called LEGO, to probe the inner workings of transformers. We obtain some insights on multi-head attention, the effect of pretraining, as well as overfitting issues. Joint work with Yi Zhang, Arturs Backurs, Ronen Eldan, Suriya Gunasekar, and Tal Wagner.<\/p>\n\n\n\n
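For a flavor of the task, here is a sketch of a LEGO-style instance (our paraphrase of the setup; the exact encoding in the paper may differ): a chain of signed assignments, presented in shuffled order, whose resolution requires propagating values along the chain.<\/p>\n\n\n\n

<pre><code>
import random
import string

def lego_instance(length=6, seed=0):
    # build a chain a = +1, b = -a, c = +b, ... over values in {-1, +1}
    rng = random.Random(seed)
    names = rng.sample(string.ascii_lowercase, length)
    clauses, values = [], {}
    prev_val = 1
    for i, name in enumerate(names):
        sign = rng.choice(['+', '-'])
        rhs = '1' if i == 0 else names[i - 1]
        clauses.append(name + '=' + sign + rhs)
        prev_val = prev_val if sign == '+' else -prev_val
        values[name] = prev_val
    rng.shuffle(clauses)   # clauses are presented out of order
    return '; '.join(clauses), values
</code><\/pre>\n\n\n\n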

\"a<\/figure>\n\n\n\n

Bio: <\/strong>Sebastien Bubeck manages the Machine Learning Foundations team at MSR Redmond. He has worked on multi-armed bandits, convex optimization, online algorithms, and adversarial examples, winning Best Paper Awards at COLT (2009 and 2016), ALT (2018), and NeurIPS (2018 and 2021). At the moment he is trying to understand Transformers.<\/p>\n\n\n\n

Video<\/a><\/p>\n\n\n\n\n","tab-content":[],"msr_startdate":"2022-07-22","msr_enddate":"2024-01-24","msr_event_time":"","msr_location":"Virtual \/ MSRA","msr_event_link":"","msr_event_recording_link":"","msr_startdate_formatted":"July 22, 2022","msr_register_text":"Watch now","msr_cta_link":"","msr_cta_text":"","msr_cta_bi_name":"","featured_image_thumbnail":"\"banner\"","event_excerpt":"MSR Asia Theory Lecture Series is a forum where we invite researchers around the world to share the latest theoretical advances in big data, artificial intelligence, and related areas. The Lecture series are broadcast live over Teams. If you would like to receive the information about the upcoming talks, please send email \u201cSubscribe to the Lecture Series\u201d to MSRA.TheoryCenter@outlook.com or subscribe to Wechat official account \"\u5fae\u8f6f\u5b66\u672f\u5408\u4f5c\u201c. 8\/21\/2024: Regularization and Optimal Multiclass Learning, Shang-Hua Teng 4\/25\/2024:…","msr_research_lab":[199560],"related-researchers":[{"type":"user_nicename","display_name":"Wei Chen","user_id":34795,"people_section":"Section name 0","alias":"weic"},{"type":"user_nicename","display_name":"Siwei Wang","user_id":42321,"people_section":"Section name 0","alias":"siweiwang"}],"msr_impact_theme":[],"related-academic-programs":[],"related-groups":[802999],"related-projects":[],"related-opportunities":[],"related-publications":[],"related-videos":[],"related-posts":[],"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event\/867756"}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/msr-event"}],"version-history":[{"count":63,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event\/867756\/revisions"}],"predecessor-version":[{"id":1083522,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event\/867756\/revisions\/1083522"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media\/869946"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=867756"}],"wp:term":[{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=867756"},{"taxonomy":"msr-region","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-region?post=867756"},{"taxonomy":"msr-event-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event-type?post=867756"},{"taxonomy":"msr-video-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-video-type?post=867756"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=867756"},{"taxonomy":"msr-program-audience","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-program-audience?post=867756"},{"taxonomy":"msr-post-option","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-post-option?post=867756"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=867756"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]
}}