{"id":200054,"date":"2015-04-08T04:30:39","date_gmt":"2015-04-08T04:30:39","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/events\/msr-india-summer-school-2015-on-machine-learning\/"},"modified":"2022-08-08T11:20:37","modified_gmt":"2022-08-08T18:20:37","slug":"msr-india-summer-school-2015-on-machine-learning","status":"publish","type":"msr-event","link":"https:\/\/www.microsoft.com\/en-us\/research\/event\/msr-india-summer-school-2015-on-machine-learning\/","title":{"rendered":"MSR India Summer School 2015 on Machine Learning"},"content":{"rendered":"\n\n\n\n\n

The MSR India Summer School series, held in collaboration with the Indian Institute of Science, consists of lectures in a chosen area by leading experts from around the world. The aim is to introduce students and researchers to important new areas and the latest results, and to provide a forum for Indian and international researchers to interact. The 2015 Summer School will be held between June 15 and 26 at the Indian Institute of Science, Bangalore, in the area of Machine Learning.<\/p>\n\n\n\n

Understanding data has become essential for almost all modern applications. This data-intensive nature of applications has spurred a great deal of research in Machine Learning and several related areas. The 2015 MSR Summer School focused on Machine Learning and its application to Big Data. In particular, the lectures covered several aspects of supervised\/unsupervised learning in high dimensions and with a large number of data points.<\/p>\n\n\n\n

The School addressed both theoretical and practical aspects of the chosen area and was targeted at research scholars, faculty members, and master\u2019s and senior undergraduate students.<\/p>\n\n\n\n

The lectures were designed to offer self-contained introductions to chosen topics, leading up to some open problems for research.<\/p>\n\n\n\n

There was also a day-long Azure hackathon as part of the summer school agenda.<\/p>\n\n\n\n

For any further information or clarification, please write to indiamrc@microsoft.com<\/a>.<\/p>\n\n\n\n\n\n\n\n

Chih-Jen Lin<\/em><\/p>\n\n\n\n

Linear and kernel methods are important machine learning techniques for data classification. Popular examples include support vector machines (SVM) and logistic regression. We begin with an introduction to this subject by deriving their optimization problems from different perspectives. This discussion is useful because many people are confused about the relationships between, for example, SVM and logistic regression. We then investigate techniques for solving optimization problems for linear and kernel classification. In particular, we show details of two representative settings: coordinate descent methods and Newton methods. Extending these optimization techniques to handle big data in either multi-core or distributed environments has recently become a very important research direction. We present some promising results and discuss future challenges.<\/p>\n\n\n\n\n\n
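A minimal, illustrative sketch of the coordinate descent setting mentioned above, applied here to L2-regularized logistic regression (one Newton step per coordinate, no line search); the objective, toy data, and parameter choices are assumptions made for this example, not the solvers presented in the lectures.

```python
import numpy as np

def cd_logreg(X, y, C=1.0, epochs=50):
    # Minimize 0.5 * ||w||^2 + C * sum_i log(1 + exp(-y_i * x_i.w))
    # by cycling over coordinates, taking one Newton step per coordinate.
    n, d = X.shape
    w = np.zeros(d)
    z = X @ w
    for _ in range(epochs):
        for j in range(d):
            s = 1.0 / (1.0 + np.exp(y * z))              # sigmoid(-y_i * z_i)
            g = w[j] - C * np.sum(y * X[:, j] * s)        # partial derivative in w_j
            h = 1.0 + C * np.sum(X[:, j] ** 2 * s * (1 - s))
            step = g / h
            w[j] -= step
            z -= step * X[:, j]                           # keep z = X @ w up to date
    return w

# toy usage: two Gaussian blobs with labels in {-1, +1}
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(1, 1, (50, 2))])
y = np.array([-1] * 50 + [1] * 50)
w = cd_logreg(X, y)
print('training accuracy:', np.mean(np.sign(X @ w) == y))
```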

Prateek Jain<\/em><\/p>\n\n\n\n

Typical high-dimensional learning problems such as sparse regression, low-rank matrix completion, and robust PCA can be solved using projections onto non-convex sets. However, providing theoretical guarantees for such methods is difficult due to the non-convexity of the projections. In this talk, we will discuss some of our recent results that show that non-convex projection-based methods can be used to solve several important problems in this area, such as: a) sparse regression, b) low-rank matrix completion, and c) robust PCA.<\/p>\n\n\n\n

In this talk, we will give an overview of the state-of-the-art for these problems and also discuss how simple non-convex techniques can significantly outperform state-of-the-art convex-relaxation-based techniques while providing solid theoretical results as well. For example, for robust PCA, we provide the first provable algorithm with time complexity O(n^2 r), which matches the time complexity of ordinary SVD and is faster than the usual nuclear-norm + L_1-regularization methods that incur O(n^3) time complexity. This talk is based on joint work with Ambuj Tewari, Purushottam Kar, Praneeth Netrapalli, Animashree Anandkumar, U N Niranjan, and Sujay Sanghavi.<\/p>\n\n\n\n\n\n
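For a concrete (and heavily simplified) picture of a non-convex projection, the sketch below projects onto the set of s-sparse vectors by keeping the s largest-magnitude coordinates, and uses it inside a basic iterative hard thresholding loop for sparse regression. The step size, iteration count, and toy data are assumptions for illustration; this is not the algorithm analyzed in the talk.

```python
import numpy as np

def project_sparse(w, s):
    # Non-convex projection: keep the s largest-magnitude entries, zero out the rest.
    out = np.zeros_like(w)
    top = np.argsort(np.abs(w))[-s:]
    out[top] = w[top]
    return out

def iht(X, y, s, iters=500):
    # Iterative hard thresholding for sparse regression: a gradient step on
    # 0.5 * ||y - Xw||^2 followed by projection onto the s-sparse set.
    step = 1.0 / np.linalg.norm(X, 2) ** 2     # conservative step: 1 / sigma_max(X)^2
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        w = project_sparse(w + step * X.T @ (y - X @ w), s)
    return w

# toy usage: recover a 3-sparse vector from noiseless random measurements
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 200))
w_true = np.zeros(200)
w_true[[5, 17, 42]] = [3.0, -2.0, 4.0]
print(np.nonzero(iht(X, X @ w_true, s=3))[0])   # should recover the support [5, 17, 42]
```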

Stefanie Jegelka<\/em><\/p>\n\n\n\n

Many problems in machine learning that involve discrete structures or subset selection may be phrased in the language of submodular set functions. The property of submodularity, also referred to as a \u2018discrete analog of convexity\u2019, expresses the notion of diminishing marginal returns, and captures combinatorial versions of rank and dependence. Submodular functions occur in a variety of areas including graph theory, information theory, combinatorial optimization, stochastic processes and game theory. In machine learning, they emerge in different forms as the potential functions of graphical models, as the utility functions in active learning and sensing, in models of diversity, in structured sparse estimation or network inference. The lectures will give an introduction to the theory of submodular functions, some applications in machine learning and algorithms for minimizing and maximizing submodular functions that exploit ties to both convexity and concavity.<\/p>\n\n\n\n\n\n
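As a small, self-contained illustration of diminishing marginal returns, the sketch below maximizes a coverage function (a standard monotone submodular function) with the greedy rule; the example sets and the budget k are assumptions, and the (1 - 1/e) greedy guarantee is the classical result for monotone submodular maximization rather than anything specific to these lectures.

```python
def coverage(chosen):
    # f(S) = number of elements covered by the union of the chosen sets (submodular).
    covered = set()
    for s in chosen:
        covered |= s
    return len(covered)

def greedy_max(candidates, k):
    # Greedy maximization: repeatedly add the set with the largest marginal gain.
    # For monotone submodular f this gives a (1 - 1/e)-approximation.
    chosen = []
    for _ in range(k):
        gain = lambda s: coverage(chosen + [s]) - coverage(chosen)
        chosen.append(max(candidates, key=gain))
    return chosen

sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 6}]
print(greedy_max(sets, 2))   # picks {1, 2, 3} and {4, 5, 6}, covering all 6 elements
```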

John Lafferty<\/em><\/p>\n\n\n\n

We present some nonparametric methods for graphical modeling. In the discrete case, where the data are binary or drawn from a finite alphabet, Markov random fields are the standard model. The Gaussian graphical model is the usual parametric model for continuous data, but it makes distributional assumptions that are often unrealistic. We discuss several approaches to building more flexible graphical models. One allows arbitrary graphs and a nonparametric extension of the Gaussian; the other uses kernel density estimation and restricts the graphs to trees and forests. Other approaches combine these two techniques.<\/p>\n\n\n\n\n\n

John Lafferty<\/em><\/p>\n\n\n\n

In massive data analysis, statistical estimation needs to be carried out with close attention to computational resources \u2014 compute cycles, communication bandwidth and storage capacity. Yet little is presently known about the fundamental tradeoffs between statistical and computational efficiency. We give a brief survey of past and more recent work in this direction. We then present new work that revisits classical linear and nonparametric estimation theory from a computational perspective, formulating an extension to classical results in minimax analysis in the setting of rate distortion theory. We also present algorithms for trading off estimation accuracy for computational speed in linear and nonparametric regression.<\/p>\n\n\n\n\n\n

John Lafferty<\/em><\/p>\n\n\n\n

Estimating high dimensional functions under weak assumptions is a central challenge in statistical machine learning. We give a survey of results on variable selection for high dimensional nonparametric regression. We then present new results in nonparametric estimation under shape constraints. We first consider the problem of estimating a convex function of several variables, and develop a screening procedure to identify irrelevant variables. We then discuss extensions of these ideas and open problems for future research.<\/p>\n\n\n\n\n\n

B Ravindran<\/em><\/p>\n\n\n\n

Reinforcement Learning (RL) is a popular paradigm for trial-and-error learning, enjoying renewed attention due to Google DeepMind\u2019s Atari-playing engine. Though there has been much interest in the field for close to three decades, RL methods have not had many large-scale deployments. In this talk I will introduce several approaches adopted by the RL community for scaling up algorithms. I will go over the fundamentals of hierarchical reinforcement learning and value function approximation methods. In the second part of the talk I will briefly cover methods for automatically discovering spatio-temporal abstractions in RL.<\/p>\n\n\n\n\n\n

Sundararajan Sellamanickam<\/em><\/p>\n\n\n\n

Distributed machine learning is an important area that has been receiving considerable attention from academic and industrial communities, as data is growing at an unprecedented rate. In the first part of the talk, we review several popular approaches that are proposed\/used to learn classifier models in the big data scenario. With commodity clusters priced by system configuration becoming popular, machine learning algorithms have to be aware of the computation and communication costs involved in order to be cost-effective and efficient. In the second part of the talk, we focus on methods that address this problem; in particular, considering different data distribution settings (e.g., example and feature partitions), we present efficient distributed learning algorithms that trade off computation and communication costs.<\/p>\n\n\n\n\n\n

Sanjoy Dasgupta<\/em><\/p>\n\n\n\n

The \u201cactive learning\u201d model is motivated by scenarios in which it is easy to amass vast quantities of unlabeled data (images and videos off the web, speech signals from microphone recordings, and so on) but costly to obtain their labels. As in supervised learning, the goal is ultimately to learn a classifier. But the labels of training points are hidden, and each of them can be revealed only at a cost. The idea is to query just a few labels that are especially informative about the decision boundary, and thereby to obtain an accurate classifier at significantly lower cost than regular supervised learning.<\/p>\n\n\n\n

There are two distinct ways of conceptualizing active learning, which lead to rather different querying strategies. The first treats active learning as an efficient search through a hypothesis space of candidates, while the second has to do with exploiting cluster or neighborhood structure in data. This talk will show how each view leads to active learning algorithms that can be made efficient and practical, and have provable label complexity bounds that are in some cases exponentially lower than for supervised learning.<\/p>\n\n\n\n\n\n

Sanjoy Dasgupta<\/em><\/p>\n\n\n\n

This tutorial will focus on entropy, exponential families, and information projection. We\u2019ll start by seeing the sense in which entropy is the only reasonable definition of randomness. We will then use entropy to motivate exponential families of distributions \u2014 which include the ubiquitous Gaussian, Poisson, and Binomial distributions, but also very general graphical models. The task of fitting such a distribution to data is a convex optimization problem with a geometric interpretation as an \u201cinformation projection\u201d: the projection of a prior distribution onto a linear subspace (defined by the data) so as to minimize a particular information-theoretic distance measure. This projection operation, which is more familiar in other guises, is a core optimization task in machine learning and statistics. We\u2019ll study the geometry of this problem and discuss algorithms for it.<\/p>\n\n\n\n\n\n
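A compact restatement of the maximum-entropy view sketched above, written as a worked derivation; the notation T, theta, mu is assumed here and the details follow the standard textbook argument rather than the tutorial itself.

```latex
% Maximize entropy subject to moment constraints (standard derivation, notation assumed):
\max_{p}\; -\sum_x p(x)\,\log p(x)
\quad \text{subject to} \quad \sum_x p(x)\,T(x) = \mu, \qquad \sum_x p(x) = 1 .
% Introducing multipliers \theta and \lambda and setting the derivative of the
% Lagrangian with respect to each p(x) to zero gives
p(x) \;=\; \frac{\exp\!\big(\theta^\top T(x)\big)}{\sum_{x'} \exp\!\big(\theta^\top T(x')\big)} ,
% i.e. an exponential family with sufficient statistic T.  Fitting \theta to data by
% maximum likelihood is the convex information-projection problem described above.
```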

Sanjoy Dasgupta<\/em><\/p>\n\n\n\n

What information does the clustering of a finite data set reveal about the underlying distribution from which the data were sampled? This basic question has proved elusive even for the most widely-used clustering procedures. One natural criterion is to seek clusters that converge (as the data set grows) to regions of high density. When all possible density levels are considered, this is a hierarchical clustering problem where the sought limit is called the \u201ccluster tree\u201d.<\/p>\n\n\n\n

This talk will describe two simple algorithms for estimating this tree that implicitly construct a multiscale hierarchy of near-neighbor graphs on the data points. We\u2019ll show that these procedures are consistent, and give rates of convergence using a percolation argument that also gives insight into how neighborhood graphs should be constructed.<\/p>\n\n\n\n\n\n
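A toy, single-linkage-style sketch of the idea: take connected components of an epsilon-neighborhood graph at one scale, and sweep the scale to trace out a cluster tree. The union-find details and the toy data are assumptions for illustration and do not reproduce the specific algorithms or the consistency analysis from the talk.

```python
import numpy as np

def clusters_at_radius(X, eps):
    # Connected components of the eps-neighborhood graph on the data points,
    # found with a tiny union-find; one level of a single-linkage cluster tree.
    n = len(X)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(X[i] - X[j]) <= eps:
                parent[find(i)] = find(j)
    return [find(i) for i in range(n)]

# sweeping eps from small to large traces out the (single-linkage) cluster tree
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.2, (30, 2)), rng.normal(3, 0.2, (30, 2))])
for eps in (0.5, 5.0):
    print(eps, 'clusters:', len(set(clusters_at_radius(X, eps))))   # expect 2, then 1
```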

Chiranjib Bhattacharya<\/em><\/p>\n\n\n\n

Topic models attempt to discover themes, or topics, from large collections of documents. Discovering themes from a document corpus is an important problem with a variety of applications in web search, corpus browsing, etc.<\/p>\n\n\n\n

In this two-part tutorial, we will begin by introducing the necessary background for understanding topic models, mainly focusing on the EM algorithm and variational inference. In the second part of the talk we will review several models, starting with Latent Semantic Indexing (LSI), proposed in 1988, and moving to the more recent and now state-of-the-art probabilistic topic models. Towards the end of the talk we will discuss recent theoretical results on provable topic models.<\/p>\n\n\n\n\n\n
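For reference, a minimal sketch of LSI as a truncated SVD of a term-document count matrix; the toy corpus and the choice of k = 2 topics are assumptions for illustration, and the probabilistic topic models that are the focus of the talk are not shown here.

```python
import numpy as np

def lsi(term_doc, k):
    # Latent Semantic Indexing: rank-k truncated SVD of the term-document matrix.
    U, s, Vt = np.linalg.svd(term_doc, full_matrices=False)
    return U[:, :k] * s[:k], Vt[:k, :]      # term embeddings, document embeddings

docs = ['cricket bat ball', 'bat ball match', 'stock market price', 'market price index']
vocab = sorted({w for d in docs for w in d.split()})
counts = np.array([[d.split().count(w) for d in docs] for w in vocab], dtype=float)
term_vecs, doc_vecs = lsi(counts, k=2)
print(np.round(doc_vecs, 2))   # documents 0-1 and 2-3 end up close in the latent space
```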

Amit Deshpande<\/em><\/p>\n\n\n\n

The subset selection problem means finding a small subset of given data that maximizes diversity, information content, or some other submodular function depending on the context. This definition can be suitably modified in each context, and has a wide range of interesting applications such as feature selection, sensor placement, document summarization, and diversification of search results. I\u2019ll review different theoretical attempts to capture this notion, the algorithmic ideas, practical applications, and their impact in return on basic research in graph theory, linear algebra, and probability.<\/p>\n\n\n\n\n\n

Manish Gupta<\/em><\/p>\n\n\n\n

Entity mining is one of the hot topics in the area of web mining and information retrieval. In this talk, I will discuss in brief three interesting entity linking problems which apply various machine learning techniques: Entity linking, Dominant Entity Identification, and Cricket Linking. Entity linking is the problem of linking a mention phrase from a document to an entity in the knowledge base. Dominant entity identification is the problem of finding whether an entity e is the dominant entity for a page p. Cricket linking is the problem of linking event mentions from cricket match reports to a set of balls in cricket commentaries. In my talk, I will discuss these problems in detail, and will present interesting solutions to them.<\/p>\n\n\n\n\n\n

Aditya Gopalan<\/em><\/p>\n\n\n\n

The ability to make continual, accurate decisions based on evolving data is key in many of today\u2019s data-driven intelligent systems. This tutorial-style talk presents an introduction to the modern study of sequential learning and decision making under uncertainty. The broad objective is to cover modeling frameworks for online prediction and learning, explore algorithms for decision making, and gain an understanding of their performance. Specifically, we will look at multi-armed bandits \u2014 models of decision making that capture the explore-vs-exploit tradeoff in learning, regret minimization, non-stochastic or adversarial online learning, and online convex optimization. Time permitting, we will discuss new directions and frontiers in the area of sequential decision making.<\/p>\n\n\n\n\n\n

Aditya Gopalan<\/em><\/p>\n\n\n\n

We consider Reinforcement Learning (RL) in parameterized bandits or more generally Markov Decision Processes (MDPs), where the parameterization can induce correlation across transition probabilities and\/or rewards. Consequently, observing a particular state transition might yield useful information about other, unobserved, parts of the MDP. In this setting, Posterior sampling a.k.a. Thompson sampling \u2013 a randomized, Bayesian-inspired algorithm originally developed for the simpler multiarmed bandit \u2013 becomes a natural candidate for a learning algorithm. We develop a version of Thompson sampling for parameterized RL problems, and derive the first known frequentist regret bounds for fairly general parameter spaces and priors. Under mild conditions on the prior used in Thompson sampling, the regret can be shown to scale logarithmically in time and with high probability. The result holds for priors without any additional, specific closed-form structure such as conjugate or product-form priors. Moreover, the constant factor in the logarithmic scaling exposes the \u201cinformation complexity\u201d of learning the MDP, in terms of the structure of the parameter space. We also report numerical results for the algorithm on a parameterized queueing system, with a large number of states (queue occupancy) but only a small number of uncertain parameters (arrival\/service rates).<\/p>\n\n\n\n

Joint work with Shie Mannor and Yishay Mansour.<\/p>\n\n\n\n\n\n
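A minimal sketch of Thompson sampling in the simpler multi-armed bandit setting mentioned above (independent Bernoulli arms with Beta priors); the arm means, horizon, and seed are assumptions for the example, and the parameterized-MDP version developed in the talk is not shown.

```python
import numpy as np

def thompson_bernoulli(true_means, horizon, seed=0):
    # Thompson sampling for a Bernoulli bandit with independent Beta(1, 1) priors:
    # draw one mean per arm from its posterior, then play the arm that looks best.
    rng = np.random.default_rng(seed)
    k = len(true_means)
    alpha, beta = np.ones(k), np.ones(k)
    pulls = np.zeros(k, dtype=int)
    for _ in range(horizon):
        arm = int(np.argmax(rng.beta(alpha, beta)))
        reward = rng.random() < true_means[arm]
        alpha[arm] += reward
        beta[arm] += 1 - reward
        pulls[arm] += 1
    return pulls

print(thompson_bernoulli([0.3, 0.5, 0.7], horizon=5000))   # most pulls go to the best arm
```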

Praneeth Netrapalli<\/em><\/p>\n\n\n\n

In this lecture, we will illustrate a novel technique due to Erdos et al. (2011) which can be used to obtain bounds on eigenvector perturbation in the \u2113\u221e norm. Standard techniques give us optimal bounds only for perturbation in the \u2113\u2082 norm. We will further use this technique to propose and analyze a new non-convex algorithm for robust PCA, where the task is to recover a low-rank matrix from sparse corruptions that are of unknown value and support. In the deterministic error setting, our method achieves exact recovery under the same conditions that are required by existing methods (which are based on convex optimization) but is much faster.<\/p>\n\n\n\n\n\n
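A rough sketch in the spirit of non-convex robust PCA: alternate a rank-r projection (truncated SVD) for the low-rank part with hard thresholding of the residual for the sparse part. The fixed threshold, iteration count, and toy data are assumptions for illustration; the actual algorithm, its thresholding schedule, and its guarantees are those presented in the lecture.

```python
import numpy as np

def robust_pca_sketch(M, rank, thresh, iters=30):
    # Alternate a rank-r projection (truncated SVD) for L with hard thresholding
    # of the residual for S, so that L + S approximates the observed matrix M.
    S = np.zeros_like(M)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]
        R = M - L
        S = np.where(np.abs(R) > thresh, R, 0.0)   # keep only large residuals as corruptions
    return L, S

# toy usage: a rank-2 matrix plus a few large sparse corruptions
rng = np.random.default_rng(0)
L0 = rng.normal(size=(50, 2)) @ rng.normal(size=(2, 50))
S0 = np.zeros((50, 50))
S0.flat[rng.choice(2500, size=50, replace=False)] = 10.0
L, S = robust_pca_sketch(L0 + S0, rank=2, thresh=3.0)
print(np.linalg.norm(L - L0) / np.linalg.norm(L0))   # relative error should be small
```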

Purushottam Kar<\/em><\/p>\n\n\n\n

The aim of this tutorial is to introduce tools and techniques that are used to analyze machine learning algorithms in statistical settings. Our focus will be on learning problems such as classification, regression, and ranking. We will look at concentration inequalities and other commonly used techniques such as uniform convergence and symmetrization, and use them to prove learning theoretic guarantees for algorithms in these settings.<\/p>\n\n\n\n
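A quick numeric illustration (an assumed example, not one from the tutorial) of the kind of concentration inequality involved: an empirical check of Hoeffding's bound for the mean of n i.i.d. {0, 1}-valued variables.

```python
import numpy as np

# Hoeffding: P(|mean - p| >= t) <= 2 * exp(-2 * n * t^2) for n i.i.d. [0, 1] samples.
rng = np.random.default_rng(0)
n, p, t, trials = 100, 0.4, 0.1, 20000
means = rng.binomial(n, p, size=trials) / n
print('empirical tail :', np.mean(np.abs(means - p) >= t))
print('Hoeffding bound:', 2 * np.exp(-2 * n * t ** 2))
```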

The talk will be largely self-contained. However, it would help if the audience could brush up on basic probability and statistics concepts such as random variables, events, probability of events, Boole\u2019s inequality, etc. There are several good resources for these online and I do not wish to recommend one over the other. However, a few nice resources are given below:<\/p>\n\n\n\n

  1. https:\/\/www.khanacademy.org\/math\/probability<\/li>
  2. http:\/\/ocw.mit.edu\/courses\/mathematics\/18-05-introduction-to-probability-and-statistics-spring-2014\/<\/li>
  3. https:\/\/en.wikipedia.org\/wiki\/Boole's_inequality<\/li><\/ol>\n\n\n\n\n\n\n\n
    Aditya Gopalan (opens in new tab)<\/span><\/a><\/div>\n\n\n\n
    Amit Deshpande<\/a><\/div>\n\n\n\n
    Chih-Jen Lin (opens in new tab)<\/span><\/a><\/div>\n\n\n\n
    Chiranjib Bhattacharya (opens in new tab)<\/span><\/a><\/div>\n\n\n\n
    John Lafferty (opens in new tab)<\/span><\/a><\/div>\n\n\n\n
    Manish Gupta<\/a><\/div>\n\n\n\n
    Prateek Jain<\/a><\/div>\n\n\n\n
    Praneeth Netrapalli<\/a><\/div>\n\n\n\n
    Purushottam Kar<\/div>\n\n\n\n
    B Ravindran (opens in new tab)<\/span><\/a><\/div>\n\n\n\n
    Sanjoy Dasgupta (opens in new tab)<\/span><\/a><\/div>\n\n\n\n
    Sriram Rajamani<\/a><\/div>\n\n\n\n
    Stefanie Jegelka (opens in new tab)<\/span><\/a><\/div>\n\n\n\n
    Sundararajan Sellamanickam<\/div>\n\n\n\n
    Suvrit Sra (opens in new tab)<\/span><\/a><\/div>\n\n\n\n
     <\/div>\n\n\n\n\n\n
    Aditya Gopalan (opens in new tab)<\/span><\/a>, Indian Institute of Science<\/div>\n\n\n\n
     <\/div>\n\n\n\n
    Prateek Jain<\/a>, Microsoft Research<\/div>\n\n\n\n
     <\/div>\n\n\n\n
    Manik Varma<\/a>, Microsoft Research<\/div>\n\n\n\n
     <\/div>\n\n\n\n
    Sundararajan Sellamanickam, Microsoft Research<\/div>\n\n\n","protected":false},"excerpt":{"rendered":"

    The MSR India Summer School series, held in collaboration with the Indian Institute of Science, consists of lectures in a chosen area by leading experts from around the world. The aim is to introduce students and researchers to important new areas and the latest results and to provide a forum for Indian and international researchers […]<\/p>\n","protected":false},"featured_media":0,"template":"","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"msr_startdate":"2015-06-15","msr_enddate":"2015-06-26","msr_location":"Indian Institute of Science, Bangalore India","msr_expirationdate":"","msr_event_recording_link":"","msr_event_link":"","msr_event_link_redirect":false,"msr_event_time":"","msr_hide_region":false,"msr_private_event":true,"footnotes":""},"research-area":[13556],"msr-region":[],"msr-event-type":[],"msr-video-type":[],"msr-locale":[268875],"msr-program-audience":[],"msr-post-option":[],"msr-impact-theme":[],"class_list":["post-200054","msr-event","type-msr-event","status-publish","hentry","msr-research-area-artificial-intelligence","msr-locale-en_us"],"msr_about":"\n\n\n\n\n

    The MSR India Summer School series, held in collaboration with the Indian Institute of Science, consists of lectures in a chosen area by leading experts from around the world. The aim is to introduce students and researchers to important new areas and the latest results, and to provide a forum for Indian and international researchers to interact. The 2015 Summer School will be held between June 15 and 26 at the Indian Institute of Science, Bangalore, in the area of Machine Learning.<\/p>\n\n\n\n

    Understanding data has become essential for almost all modern applications. This data-intensive nature of applications has spurred a great deal of research in Machine Learning and several related areas. The 2015 MSR Summer School focused on Machine Learning and its application to Big Data. In particular, the lectures covered several aspects of supervised\/unsupervised learning in high dimensions and with a large number of data points.<\/p>\n\n\n\n

    The School addressed both theoretical and practical aspects of the chosen area and was targeted at research scholars, faculty members, and master\u2019s and senior undergraduate students.<\/p>\n\n\n\n

    The lectures were designed to offer self-contained introductions to chosen topics, leading up to some open problems for research.<\/p>\n\n\n\n

    There was also a day-long Azure hackathon as part of the summer school agenda.<\/p>\n\n\n\n

    For any further information or clarification, please write to indiamrc@microsoft.com<\/a>.<\/p>\n\n\n\n\n\n\n\n

    Chih-Jen Lin<\/em><\/p>\n\n\n\n

    Linear and kernel methods are important machine learning techniques for data classification. Popular examples include support vector machines (SVM) and logistic regression. We begin with an introduction to this subject by deriving their optimization problems from different perspectives. This discussion is useful because many people are confused about the relationships between, for example, SVM and logistic regression. We then investigate techniques for solving optimization problems for linear and kernel classification. In particular, we show details of two representative settings: coordinate descent methods and Newton methods. Extending these optimization techniques to handle big data in either multi-core or distributed environments has recently become a very important research direction. We present some promising results and discuss future challenges.<\/p>\n\n\n\n\n\n

    Prateek Jain<\/em><\/p>\n\n\n\n

    Typical high-dimensional learning problems such as sparse regression, low-rank matrix completion, and robust PCA can be solved using projections onto non-convex sets. However, providing theoretical guarantees for such methods is difficult due to the non-convexity of the projections. In this talk, we will discuss some of our recent results that show that non-convex projection-based methods can be used to solve several important problems in this area, such as: a) sparse regression, b) low-rank matrix completion, and c) robust PCA.<\/p>\n\n\n\n

    In this talk, we will give an overview of the state-of-the-art for these problems and also discuss how simple non-convex techniques can significantly outperform state-of-the-art convex-relaxation-based techniques while providing solid theoretical results as well. For example, for robust PCA, we provide the first provable algorithm with time complexity O(n^2 r), which matches the time complexity of ordinary SVD and is faster than the usual nuclear-norm + L_1-regularization methods that incur O(n^3) time complexity. This talk is based on joint work with Ambuj Tewari, Purushottam Kar, Praneeth Netrapalli, Animashree Anandkumar, U N Niranjan, and Sujay Sanghavi.<\/p>\n\n\n\n\n\n

    Stefanie Jegelka<\/em><\/p>\n\n\n\n

    Many problems in machine learning that involve discrete structures or subset selection may be phrased in the language of submodular set functions. The property of submodularity, also referred to as a \u2018discrete analog of convexity\u2019, expresses the notion of diminishing marginal returns, and captures combinatorial versions of rank and dependence. Submodular functions occur in a variety of areas including graph theory, information theory, combinatorial optimization, stochastic processes and game theory. In machine learning, they emerge in different forms as the potential functions of graphical models, as the utility functions in active learning and sensing, in models of diversity, in structured sparse estimation or network inference. The lectures will give an introduction to the theory of submodular functions, some applications in machine learning and algorithms for minimizing and maximizing submodular functions that exploit ties to both convexity and concavity.<\/p>\n\n\n\n\n\n

    John Lafferty<\/em><\/p>\n\n\n\n

    We present some nonparametric methods for graphical modeling. In the discrete case, where the data are binary or drawn from a finite alphabet, Markov random fields are the standard model. The Gaussian graphical model is the usual parametric model for continuous data, but it makes distributional assumptions that are often unrealistic. We discuss several approaches to building more flexible graphical models. One allows arbitrary graphs and a nonparametric extension of the Gaussian; the other uses kernel density estimation and restricts the graphs to trees and forests. Other approaches combine these two techniques.<\/p>\n\n\n\n\n\n

    John Lafferty<\/em><\/p>\n\n\n\n

    In massive data analysis, statistical estimation needs to be carried out with close attention to computational resources \u2014 compute cycles, communication bandwidth and storage capacity. Yet little is presently known about the fundamental tradeoffs between statistical and computational efficiency. We give a brief survey of past and more recent work in this direction. We then present new work that revisits classical linear and nonparametric estimation theory from a computational perspective, formulating an extension to classical results in minimax analysis in the setting of rate distortion theory. We also present algorithms for trading off estimation accuracy for computational speed in linear and nonparametric regression.<\/p>\n\n\n\n\n\n

    John Lafferty<\/em><\/p>\n\n\n\n

    Estimating high dimensional functions under weak assumptions is a central challenge in statistical machine learning. We give a survey of results on variable selection for high dimensional nonparametric regression. We then present new results in nonparametric estimation under shape constraints. We first consider the problem of estimating a convex function of several variables, and develop a screening procedure to identify irrelevant variables. We then discuss extensions of these ideas and open problems for future research.<\/p>\n\n\n\n\n\n

    B Ravindran<\/em><\/p>\n\n\n\n

    Reinforcement Learning (RL) is a popular paradigm for trial-and-error learning, enjoying renewed attention due to Google DeepMind\u2019s Atari-playing engine. Though there has been much interest in the field for close to three decades, RL methods have not had many large-scale deployments. In this talk I will introduce several approaches adopted by the RL community for scaling up algorithms. I will go over the fundamentals of hierarchical reinforcement learning and value function approximation methods. In the second part of the talk I will briefly cover methods for automatically discovering spatio-temporal abstractions in RL.<\/p>\n\n\n\n\n\n

    Sundararajan Sellamanickam<\/em><\/p>\n\n\n\n

    Distributed machine learning is an important area that has been receiving considerable attention from academic and industrial communities, as data is growing at an unprecedented rate. In the first part of the talk, we review several popular approaches that are proposed\/used to learn classifier models in the big data scenario. With commodity clusters priced by system configuration becoming popular, machine learning algorithms have to be aware of the computation and communication costs involved in order to be cost-effective and efficient. In the second part of the talk, we focus on methods that address this problem; in particular, considering different data distribution settings (e.g., example and feature partitions), we present efficient distributed learning algorithms that trade off computation and communication costs.<\/p>\n\n\n\n\n\n

    Sanjoy Dasgupta<\/em><\/p>\n\n\n\n

    The \u201cactive learning\u201d model is motivated by scenarios in which it is easy to amass vast quantities of unlabeled data (images and videos off the web, speech signals from microphone recordings, and so on) but costly to obtain their labels. As in supervised learning, the goal is ultimately to learn a classifier. But the labels of training points are hidden, and each of them can be revealed only at a cost. The idea is to query just a few labels that are especially informative about the decision boundary, and thereby to obtain an accurate classifier at significantly lower cost than regular supervised learning.<\/p>\n\n\n\n

    There are two distinct ways of conceptualizing active learning, which lead to rather different querying strategies. The first treats active learning as an efficient search through a hypothesis space of candidates, while the second has to do with exploiting cluster or neighborhood structure in data. This talk will show how each view leads to active learning algorithms that can be made efficient and practical, and have provable label complexity bounds that are in some cases exponentially lower than for supervised learning.<\/p>\n\n\n\n\n\n

    Sanjoy Dasgupta<\/em><\/p>\n\n\n\n

    This tutorial will focus on entropy, exponential families, and information projection. We\u2019ll start by seeing the sense in which entropy is the only reasonable definition of randomness. We will then use entropy to motivate exponential families of distributions \u2014 which include the ubiquitous Gaussian, Poisson, and Binomial distributions, but also very general graphical models. The task of fitting such a distribution to data is a convex optimization problem with a geometric interpretation as an \u201cinformation projection\u201d: the projection of a prior distribution onto a linear subspace (defined by the data) so as to minimize a particular information-theoretic distance measure. This projection operation, which is more familiar in other guises, is a core optimization task in machine learning and statistics. We\u2019ll study the geometry of this problem and discuss algorithms for it.<\/p>\n\n\n\n\n\n

    Sanjoy Dasgupta<\/em><\/p>\n\n\n\n

    What information does the clustering of a finite data set reveal about the underlying distribution from which the data were sampled? This basic question has proved elusive even for the most widely-used clustering procedures. One natural criterion is to seek clusters that converge (as the data set grows) to regions of high density. When all possible density levels are considered, this is a hierarchical clustering problem where the sought limit is called the \u201ccluster tree\u201d.<\/p>\n\n\n\n

    This talk will describe two simple algorithms for estimating this tree that implicitly construct a multiscale hierarchy of near-neighbor graphs on the data points. We\u2019ll show that these procedures are consistent, and give rates of convergence using a percolation argument that also gives insight into how neighborhood graphs should be constructed.<\/p>\n\n\n\n\n\n

    Chiranjib Bhattacharya<\/em><\/p>\n\n\n\n

    Topic models attempt to discover themes, or topics, from large collections of documents. Discovering themes from a document corpus is an important problem with a variety of applications in web search, corpus browsing, etc.<\/p>\n\n\n\n

    In this two-part tutorial, we will begin by introducing the necessary background for understanding topic models, mainly focusing on the EM algorithm and variational inference. In the second part of the talk we will review several models, starting with Latent Semantic Indexing (LSI), proposed in 1988, and moving to the more recent and now state-of-the-art probabilistic topic models. Towards the end of the talk we will discuss recent theoretical results on provable topic models.<\/p>\n\n\n\n\n\n

    Amit Deshpande<\/em><\/p>\n\n\n\n

    The subset selection problem means finding a small subset of given data that maximizes diversity, information content, or some other submodular function depending on the context. This definition can be suitably modified in each context, and has a wide range of interesting applications such as feature selection, sensor placement, document summarization, and diversification of search results. I\u2019ll review different theoretical attempts to capture this notion, the algorithmic ideas, practical applications, and their impact in return on basic research in graph theory, linear algebra, and probability.<\/p>\n\n\n\n\n\n

    Manish Gupta<\/em><\/p>\n\n\n\n

    Entity mining is one of the hot topics in the area of web mining and information retrieval. In this talk, I will discuss in brief three interesting entity linking problems which apply various machine learning techniques: Entity linking, Dominant Entity Identification, and Cricket Linking. Entity linking is the problem of linking a mention phrase from a document to an entity in the knowledge base. Dominant entity identification is the problem of finding whether an entity e is the dominant entity for a page p. Cricket linking is the problem of linking event mentions from cricket match reports to a set of balls in cricket commentaries. In my talk, I will discuss these problems in detail, and will present interesting solutions to them.<\/p>\n\n\n\n\n\n

    Aditya Gopalan<\/em><\/p>\n\n\n\n

    The ability to make continual, accurate decisions based on evolving data is key in many of today\u2019s data-driven intelligent systems. This tutorial-style talk presents an introduction to the modern study of sequential learning and decision making under uncertainty. The broad objective is to cover modeling frameworks for online prediction and learning, explore algorithms for decision making, and gain an understanding of their performance. Specifically, we will look at multi-armed bandits \u2014 models of decision making that capture the explore-vs-exploit tradeoff in learning, regret minimization, non-stochastic or adversarial online learning, and online convex optimization. Time permitting, we will discuss new directions and frontiers in the area of sequential decision making.<\/p>\n\n\n\n\n\n

    Aditya Gopalan<\/em><\/p>\n\n\n\n

    We consider Reinforcement Learning (RL) in parameterized bandits or more generally Markov Decision Processes (MDPs), where the parameterization can induce correlation across transition probabilities and\/or rewards. Consequently, observing a particular state transition might yield useful information about other, unobserved, parts of the MDP. In this setting, Posterior sampling a.k.a. Thompson sampling \u2013 a randomized, Bayesian-inspired algorithm originally developed for the simpler multiarmed bandit \u2013 becomes a natural candidate for a learning algorithm. We develop a version of Thompson sampling for parameterized RL problems, and derive the first known frequentist regret bounds for fairly general parameter spaces and priors. Under mild conditions on the prior used in Thompson sampling, the regret can be shown to scale logarithmically in time and with high probability. The result holds for priors without any additional, specific closed-form structure such as conjugate or product-form priors. Moreover, the constant factor in the logarithmic scaling exposes the \u201cinformation complexity\u201d of learning the MDP, in terms of the structure of the parameter space. We also report numerical results for the algorithm on a parameterized queueing system, with a large number of states (queue occupancy) but only a small number of uncertain parameters (arrival\/service rates).<\/p>\n\n\n\n

    Joint work with Shie Mannor and Yishay Mansour.<\/p>\n\n\n\n\n\n

    Praneeth Netrapalli<\/em><\/p>\n\n\n\n

    In this lecture, we will illustrate a novel technique due to Erdos et al. (2011) which can be used to obtain bounds on eigenvector perturbation in the \u2113\u221e norm. Standard techniques give us optimal bounds only for perturbation in the \u2113\u2082 norm. We will further use this technique to propose and analyze a new non-convex algorithm for robust PCA, where the task is to recover a low-rank matrix from sparse corruptions that are of unknown value and support. In the deterministic error setting, our method achieves exact recovery under the same conditions that are required by existing methods (which are based on convex optimization) but is much faster.<\/p>\n\n\n\n\n\n

    Purushottam Kar<\/em><\/p>\n\n\n\n

    The aim of this tutorial is to introduce tools and techniques that are used to analyze machine learning algorithms in statistical settings. Our focus will be on learning problems such as classification, regression, and ranking. We will look at concentration inequalities and other commonly used techniques such as uniform convergence and symmetrization, and use them to prove learning theoretic guarantees for algorithms in these settings.<\/p>\n\n\n\n

    The talk will be largely self-contained. However, it would help if the audience could brush up on basic probability and statistics concepts such as random variables, events, probability of events, Boole\u2019s inequality, etc. There are several good resources for these online and I do not wish to recommend one over the other. However, a few nice resources are given below:<\/p>\n\n\n\n

    1. https:\/\/www.khanacademy.org\/math\/probability<\/li>
    2. http:\/\/ocw.mit.edu\/courses\/mathematics\/18-05-introduction-to-probability-and-statistics-spring-2014\/<\/li>
    3. https:\/\/en.wikipedia.org\/wiki\/Boole's_inequality<\/li><\/ol>\n\n\n\n\n\n\n\n
      Aditya Gopalan<\/a><\/div>\n\n\n\n
      Amit Deshpande<\/a><\/div>\n\n\n\n
      Chih-Jen Lin<\/a><\/div>\n\n\n\n
      Chiranjib Bhattacharya<\/a><\/div>\n\n\n\n
      John Lafferty<\/a><\/div>\n\n\n\n
      Manish Gupta<\/a><\/div>\n\n\n\n
      Prateek Jain<\/a><\/div>\n\n\n\n
      Praneeth Netrapalli<\/a><\/div>\n\n\n\n
      Purushottam Kar<\/div>\n\n\n\n
      B Ravindran<\/a><\/div>\n\n\n\n
      Sanjoy Dasgupta<\/a><\/div>\n\n\n\n
      Sriram Rajamani<\/a><\/div>\n\n\n\n
      Stefanie Jegelka<\/a><\/div>\n\n\n\n
      Sundararajan Sellamanickam<\/div>\n\n\n\n
      Suvrit Sra<\/a><\/div>\n\n\n\n
       <\/div>\n\n\n\n\n\n
      Aditya Gopalan<\/a>, Indian Institute of Science<\/div>\n\n\n\n
       <\/div>\n\n\n\n
      Prateek Jain<\/a>, Microsoft Research<\/div>\n\n\n\n
       <\/div>\n\n\n\n
      Manik Varma<\/a>, Microsoft Research<\/div>\n\n\n\n
       <\/div>\n\n\n\n
      Sundarajan Sellamanickam, Microsoft Research<\/div>\n\n\n","tab-content":[{"id":0,"name":"Summary","content":"The MSR India Summer School series, held in collaboration with the Indian Institute of Science, consists of lectures in a chosen area by leading experts from around the world. The aim is to introduce students and researchers to important new areas and the latest results and to provide a forum for Indian and international researchers to interact. The 2015 Summer School will be held between June 15 - 26 at the Indian Institute of Science, Bangalore in the area of Machine Learning.\r\n\r\nUnderstanding data has become essential for almost all modern applications. This data intensive nature of applications have spurred a great deal of research in Machine Learning and several related areas. The 2015 MSR Summer school focused on Machine Learning and its application to Big Data. In particular, the lectures covered several aspects of supervised\/unsupervised learning in high-dimensions and with large number of data points.\r\n\r\nThe School addressed both theoretical as well as practical aspects of the chosen area and was targeted at research scholars, faculty members, masters and senior undergraduate students.\r\n\r\nThe lectures were designed to offer self-contained introductions to chosen topics, leading up to some open problems for research.\r\n\r\nThere was also a day long Azure hackathon as part of the summer school agenda.\r\n\r\nFor any further information \/clarification, please write to indiamrc@microsoft.com<\/a>."},{"id":1,"name":"Abstracts","content":"[accordion]\r\n\r\n[panel header=\"Large-scale Linear and Kernel Classification\"]Chih-Jen Lin<\/em>\r\n\r\nLinear and kernel methods are important machine learning techniques for data classification. Popular examples include support vector machines (SVM) and logistic regression. We begin with an introduction on this subject by deriving their optimization problems through different aspects. This discussion is useful because many people are confused about the relationships between, for example, SVM and logistic regression. We then move to investigate techniques for solving optimization problems for linear and kernel classification. In particular, we show details of two representative settings: coordinate descent methods and Newton methods. Recently, extending these optimization techniques to handle big data in either multi-core or distributed environments is a very important research direction. We present some promising results and discuss future challenges.\r\n[\/panel]\r\n\r\n[panel header=\"Provable Non-convex Projections for High-dimensional Learning Problems\"]Prateek Jain<\/em>\r\n\r\nTypical high-dimensional learning problems such as sparse regression, low-rank matrix completion, robust PCA etc can be solved using projections onto non-convex sets. However, providing theoretical guarantees for such methods is difficult due to the non-convexity in projections. In this talk, we will discuss some of our recent results that show that non-convex projections based methods can be used to solve several important problems in this area such as: a) sparse regression, b) low-rank matrix completion, c) robust PCA. \r\n\r\nIn this talk, we will give an overview of the state-of-the-art for these problems and also discuss how simple non-convex techniques can significantly outperform state-of-the-art convex relaxation based techniques and provide solid theoretical results as well. 
For example, for robust PCA, we provide first provable algorithm with time complexity O(n^2 r) which matches the time complexity of normal SVD and is faster than the usual nuclear+L_1-regularization methods that incur O(n^3) time complexity. This talk is based on joint works with Ambuj Tewari, Purushottam Kar, Praneeth Netrapalli, Animashree Anandkumar, U N Niranjan, and Sujay Sanghavi.\r\n[\/panel]\r\n\r\n[panel header=\"Submodular optimization and machine learning\"]Stefanie Jegelka<\/em>\r\n\r\nMany problems in machine learning that involve discrete structures or subset selection may be phrased in the language of submodular set functions. The property of submodularity, also referred to as a 'discrete analog of convexity', expresses the notion of diminishing marginal returns, and captures combinatorial versions of rank and dependence. Submodular functions occur in a variety of areas including graph theory, information theory, combinatorial optimization, stochastic processes and game theory. In machine learning, they emerge in different forms as the potential functions of graphical models, as the utility functions in active learning and sensing, in models of diversity, in structured sparse estimation or network inference. The lectures will give an introduction to the theory of submodular functions, some applications in machine learning and algorithms for minimizing and maximizing submodular functions that exploit ties to both convexity and concavity.\r\n[\/panel]\r\n\r\n[panel header=\"Lecture 1: Nonparametric graphical models\"]John Lafferty<\/em>\r\n\r\nWe present some nonparametric methods for graphical modeling. In the discrete case, where the data are binary or drawn from a finite alphabet, Markov random fields are the standard model. The Gaussian graphical model is the usual parametric model for continuous data, but it makes distributional assumptions that are often unrealistic. We discuss several approaches to building more flexible graphical models. One allows arbitrary graphs and a nonparametric extension of the Gaussian;the other uses kernel density estimation and restricts the graphs to trees and forests. Other approaches combine these two techniques.\r\n[\/panel]\r\n\r\n[panel header=\"Lecture 2: Computational tradeoffs in statistical estimation\"]John Lafferty<\/em>\r\n\r\nIn massive data analysis, statistical estimation needs to be carried out with close attention to computational resources -- compute cycles, communication bandwidth and storage capacity. Yet little is presently known about the fundamental tradeoffs between statistical and computational efficiency. We give a brief survey of past and more recent work in this direction. We then present new work that revisits classical linear and nonparametric estimation theory from a computational perspective, formulating an extension to classical results in minimax analysis in the setting of rate distortion theory. We also present algorithms for trading off estimation accuracy for computational speed in linear and nonparametric regression. \r\n[\/panel]\r\n\r\n[panel header=\"Lecture 3: High dimensional nonparametric estimation\"]John Lafferty<\/em>\r\n\r\nEstimating high dimensional functions under weak assumptions is a central challenge in statistical machine learning. We give a survey of results on variable selection for high dimensional nonparametric regression. We then present new results in nonparametric estimation under shape constraints. 
We first consider the problem of estimating a convex function of several variables, and develop a screening procedure to identify irrelevant variables. We then discuss extensions of these ideas and open problems for future research.\r\n[\/panel]\r\n\r\n[panel header=\"Scaling up Reinforcement Learning\"]B Ravindran<\/em>\r\n\r\nReinforcement Learning (RL) is a popular paradigm for trial-and-error learning enjoying renewed popularity due to Google Deepmind's Atari playing engine. Though there has been much interest in the field for close to 3 decades RL methods have not had many large scale deployments. In this talk I will introduce several approaches adopted by the RL community for scaling up algorithms. I will go over the fundamentals of hierarchical reinforcement learning and value function approximation methods. In the second part of the talk I will briefly cover methods for automatically discovering spatio-temporal abstractions in RL.\r\n[\/panel]\r\n\r\n[panel header=\"Distributed Machine Learning Algorithms: Communication-Computation Trade-offs\"]Sundararajan Sellamanickam<\/em>\r\n\r\nDistributed machine learning is an important area that has been receiving considerable attention from academic and industrial communities, as data is growing in unprecedented rate. In the first part of the talk, we review several popular approaches that are proposed\/used to learn classifier models in the big data scenario. With commodity clusters priced on system configurations becoming popular, machine learning algorithms have to be aware of the computation and communication costs involved in order to be cost effective and efficient. In the second part of the talk, we focus on methods that address this problem; in particular, considering different data distribution settings (e.g., example and feature partitions), we present efficient distributed learning algorithms that trade-off computation and communication costs. \r\n[\/panel]\r\n\r\n[panel header=\"I. Active learning and annotation (60 mins)\"]Sanjoy Dasgupta<\/em>\r\n\r\nThe \"active learning\" model is motivated by scenarios in which it is easy to amass vast quantities of unlabeled data (images and videos off the web, speech signals from microphone recordings, and so on) but costly to obtain their labels. Like supervised learning, the goal is ultimately to learn a classifier. But the labels of training points are hidden, and each of them can be revealed only at a cost. The idea is to query just a few labels that are especially informative about the decision boundary, and thereby to obtain an accurate classifier at significantly lower cost than regular supervised learning.\r\n\r\nThere are two distinct ways of conceptualizing active learning, which lead to rather different querying strategies. The first treats active learning as an efficient search through a hypothesis space of candidates, while the second has to do with exploiting cluster or neighborhood structure in data. This talk will show how each view leads to active learning algorithms that can be made efficient and practical, and have provable label complexity bounds that are in some cases exponentially lower than for supervised learning.\r\n[\/panel]\r\n\r\n[panel header=\"II. Information geometry (90 mins)\"]Sanjoy Dasgupta<\/em>\r\n\r\nThis tutorial will focus on entropy, exponential families, and information projection. We'll start by seeing the sense in which entropy is the only reasonable definition of randomness. 
We will then use entropy to motivate exponential families of distributions -- which include the ubiquitous Gaussian, Poisson, and Binomial distributions, but also very general graphical models. The task of fitting such a distribution to data is a convex optimization problem with a geometric interpretation as an \"information projection\": the projection of a prior distribution onto a linear subspace (defined by the data) so as to minimize a particular information-theoretic distance measure. This projection operation, which is more familiar in other guises, is a core optimization task in machine learning and statistics. We'll study the geometry of this problem and discuss algorithms for it.\r\n[\/panel]\r\n\r\n[panel header=\"III. Cluster trees and neighborhood graphs (60 mins)\"]Sanjoy Dasgupta<\/em>\r\n\r\nWhat information does the clustering of a finite data set reveal about the underlying distribution from which the data were sampled? This basic question has proved elusive even for the most widely-used clustering procedures. One natural criterion is to seek clusters that converge (as the data set grows) to regions of high density. When all possible density levels are considered, this is a hierarchical clustering problem where the sought limit is called the \"cluster tree\".\r\n\r\nThis talk will describe two simple algorithms for estimating this tree that implicitly construct a multiscale hierarchy of near-neighbor graphs on the data points. We'll show that these procedure are consistent, and give rates of convergence using a percolation argument that also gives insight into how neighborhood graphs should be constructed.\r\n[\/panel]\r\n\r\n[panel header=\"From LSI to Probabilistic Topic models: An introduction to Topic models\"]Chiranjib Bhattacharya<\/em>\r\n\r\nTopic models attempt to discover themes, or Topics, from large collection of documents. Discovering themes from a document corpus is an important problem with a variety of applications in Web-search, Corpus Browsing etc. \r\n\r\nIn this two part tutorial, we will begin by introducing neccessary background in understanding Topic models, mainly focussing on EM algorithm and Variational Inference. In the second part of the talk we will review several models starting with Latent Semantic Indexing(LSI), proposed in 1988, to the more recent and now state of the art Probabilistic Topic models. Towards the end of the talk we will discuss recent theoretical results on \\emph{provable} topic models.\r\n[\/panel]\r\n\r\n[panel header=\"Subset selection problems\"]Amit Deshpande<\/em>\r\n\r\nSubset selection problem means finding a small subset of given data that maximizes diversity or information content or some other submodular function depending on the context. This definition can be suitably modified in each context, and has a wide range of interesting applications like feature selection, sensor placement, document summarization, diversification of search. I\u2019ll review different theoretical attempts to capture this notion, the algorithmic ideas, practical applications, and their impact in return on basic research in graph theory, linear algebra, probability.\r\n[\/panel]\r\n\r\n[panel header=\"Entity Mining at Microsoft Bing Hyderabad\"]Manish Gupta<\/em>\r\n\r\nEntity mining is one of the hot topics in the area of web mining and information retrieval. 
In this talk, I will discuss in brief three interesting entity linking problems which apply various machine learning techniques: Entity linking, Dominant Entity Identification, and Cricket Linking. Entity linking is the problem of linking a mention phrase from a document to an entity in the knowledge base. Dominant entity identification is the problem of finding whether an entity e is the dominant entity for a page p. Cricket linking is the problem of linking event mentions from cricket match reports to a set of balls in cricket commentaries. In my talk, I will discuss these problems in detail, and will present interesting solutions to them.\r\n[\/panel]\r\n\r\n[panel header=\"TALK 1: Online Learning and Bandits\"]Aditya Gopalan<\/em>\r\n\r\nThe ability to make continual, accurate decisions based on evolving data is key in many of today\u2019s data-driven intelligent systems. This tutorial-style talk presents an introduction to the modern study of sequential learning and decision making under uncertainty. The broad objective is to cover modeling frameworks for online prediction and learning, explore algorithms for decision making, and gain an understanding of their performance. Specifically, we will look at multi-armed bandits -- models of decision making that capture the explore-vs-exploit tradeoff in learning, regret minimization, non-stochastic or adversarial online learning, and online convex optimization. Time permitting, we will discuss new directions and frontiers in the area of sequential decision making.\r\n[\/panel]\r\n\r\n[panel header=\"Talk 2: Online Learning in Complex Bandits & Markov Decision Processes\"]Aditya Gopalan<\/em>\r\n\r\nWe consider Reinforcement Learning (RL) in parameterized bandits or more generally Markov Decision Processes (MDPs), where the parameterization can induce correlation across transition probabilities and\/or rewards. Consequently, observing a particular state transition might yield useful information about other, unobserved, parts of the MDP. In this setting, Posterior sampling a.k.a. Thompson sampling - a randomized, Bayesian-inspired algorithm originally developed for the simpler multiarmed bandit - becomes a natural candidate for a learning algorithm. We develop a version of Thompson sampling for parameterized RL problems, and derive the first known frequentist regret bounds for fairly general parameter spaces and priors. Under mild conditions on the prior used in Thompson sampling, the regret can be shown to scale logarithmically in time and with high probability. The result holds for priors without any additional, specific closed-form structure such as conjugate or product-form priors. Moreover, the constant factor in the logarithmic scaling exposes the \"information complexity\" of learning the MDP, in terms of the structure of the parameter space. We also report numerical results for the algorithm on a parameterized queueing system, with a large number of states (queue occupancy) but only a small number of uncertain parameters (arrival\/service rates). \r\n\r\nJoint work with Shie Mannor and Yishay Mansour.\r\n[\/panel]\r\n\r\n[panel header=\"Non-convex Robust PCA\"]Praneeth Netrapalli<\/em>\r\n\r\nIn this lecture, we will illustrate a novel technique due to Erdos et al. (2011) which can be used to obtain bounds on eigenvector perturbation in the \\ell_{\\infty} norm. Standard techniques give us optimal bounds only for perturbation in the \\ell_2 norm. 
We will further use this technique to propose and analyze a new non-convex algorithm for robust PCA, where the task is to recover a low-rank matrix from sparse corruptions that are of unknown value and support. In the deterministic error setting, our method achieves exact recovery under the same conditions that are required by existing methods (which are based on convex optimization) but is much faster.\r\n[\/panel]\r\n\r\n[panel header=\"An Introduction to Concentration Inequalities and Statistical Learning Theory\"]Purushottam Kar<\/em>\r\n\r\nThe aim of this tutorial is to introduce tools and techniques that are used to analyze machine learning algorithms in statistical settings. Our focus will be on learning problems such as classification, regression, and ranking. We will look at concentration inequalities and other commonly used techniques such as uniform convergence and symmetrization, and use them to prove learning theoretic guarantees for algorithms in these settings.\r\n\r\nThe talk will be largely self-contained. However, it would help if the audience could brush up basic probability and statistics concepts such as random variables, events, probability of events, Boole\u2019s inequality etc. There are several good resources for these online and I do not wish to recommend one over the other. However, a couple of nice resources are given below\r\n\r\n1) https:\/\/www.khanacademy.org\/math\/probability\r\n\r\n2) http:\/\/ocw.mit.edu\/courses\/mathematics\/18-05-introduction-to-probability-and-statistics-spring-2014\/\r\n\r\n3) https:\/\/en.wikipedia.org\/wiki\/Boole's_inequality\r\n[\/panel]\r\n\r\n\r\n\r\n\r\n\r\n[\/accordion]"},{"id":2,"name":"Speakers","content":"
      Aditya Gopalan<\/a><\/div>\r\n
      Amit Deshpande<\/a><\/div>\r\n
      Chih-Jen Lin<\/a><\/div>\r\n
      Chiranjib Bhattacharya<\/a><\/div>\r\n
      John Lafferty<\/a><\/div>\r\n
      Manish Gupta<\/a><\/div>\r\n
      Prateek Jain<\/a><\/div>\r\n
      Praneeth Netrapalli<\/a><\/div>\r\n
      Purushottam Kar<\/div>\r\n
      B Ravindran<\/a><\/div>\r\n
      Sanjoy Dasgupta<\/a><\/div>\r\n
      Sriram Rajamani<\/a><\/div>\r\n
      Stefanie Jegelka<\/a><\/div>\r\n
      Sundararajan Sellamanickam<\/div>\r\n
      Suvrit Sra<\/a><\/div>\r\n
      <\/div>"},{"id":3,"name":"Committee","content":"
      Aditya Gopalan<\/a>, Indian Institute of Science<\/div>\r\n
      <\/div>\r\n
      Prateek Jain<\/a>, Microsoft Research<\/div>\r\n
      <\/div>\r\n
      Manik Varma<\/a>, Microsoft Research<\/div>\r\n
      <\/div>\r\n
      Sundarajan Sellamanickam, Microsoft Research<\/div>"},{"id":4,"name":"Video","content":"[videos]"}],"msr_startdate":"2015-06-15","msr_enddate":"2015-06-26","msr_event_time":"","msr_location":"Indian Institute of Science, Bangalore India","msr_event_link":"","msr_event_recording_link":"","msr_startdate_formatted":"June 15, 2015","msr_register_text":"Watch now","msr_cta_link":"","msr_cta_text":"","msr_cta_bi_name":"","featured_image_thumbnail":null,"event_excerpt":"The MSR India Summer School series, held in collaboration with the Indian Institute of Science, consists of lectures in a chosen area by leading experts from around the world. The aim is to introduce students and researchers to important new areas and the latest results and to provide a forum for Indian and international researchers to interact. The 2015 Summer School will be held between June 15 - 26 at the Indian Institute of Science,…","msr_research_lab":[199562],"related-researchers":[],"msr_impact_theme":[],"related-academic-programs":[],"related-groups":[],"related-projects":[],"related-opportunities":[],"related-publications":[],"related-videos":[],"related-posts":[],"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event\/200054"}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/msr-event"}],"version-history":[{"count":2,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event\/200054\/revisions"}],"predecessor-version":[{"id":868179,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event\/200054\/revisions\/868179"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=200054"}],"wp:term":[{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=200054"},{"taxonomy":"msr-region","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-region?post=200054"},{"taxonomy":"msr-event-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event-type?post=200054"},{"taxonomy":"msr-video-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-video-type?post=200054"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=200054"},{"taxonomy":"msr-program-audience","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-program-audience?post=200054"},{"taxonomy":"msr-post-option","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-post-option?post=200054"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=200054"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}