{"id":171268,"date":"2014-01-28T11:40:38","date_gmt":"2014-01-28T11:40:38","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/project\/learning-theory\/"},"modified":"2017-06-19T11:53:07","modified_gmt":"2017-06-19T18:53:07","slug":"learning-theory","status":"publish","type":"msr-project","link":"https:\/\/www.microsoft.com\/en-us\/research\/project\/learning-theory\/","title":{"rendered":"Learning Theory"},"content":{"rendered":"

We work on questions motivated by machine learning, in particular from the theoretical and computational perspectives. Our goals are to mathematically understand the effectiveness of existing learning algorithms and to design new learning algorithms. We combine expertise from diverse fields such as algorithms and complexity, statistics, and convex geometry.<\/p>\n

We are interested in a broad range of problems. For example, we have studied the following.<\/p>\n

\n
    \n
  1. Can we rigorously prove the effectiveness of practical algorithms?<\/strong> For neural networks, we show that, under suitable conditions, the classic gradient descent algorithm (combined with local perturbation) can effectively learn low-degree polynomials. For deep networks, we show that a natural algorithm can grow networks in a bottom-up fashion for certain input distributions.<\/li>\n
  2. Can we learn more effectively by leveraging the structure of the target functions?<\/strong> We have designed regression algorithms that learn low-degree sparse polynomials with complexity depending on the sparsity, a nearly optimal high-dimensional linear regression method for any design matrix that exploits the geometry of the design, and regression-based algorithms for learning convex polytopes that use new limit theorems in probability.<\/li>\n
  3. Can we learn while providing privacy for our sample points?<\/strong> We show a close connection between differential privacy and robust statistics, and we show how to modify boosting to ensure differential privacy.<\/li>\n
  4. What is the complexity of fundamental machine learning tasks?<\/strong> We attempt to shed light on the boundary between tractable and intractable problems using tools from computational complexity and cryptography.<\/li>\n<\/ol>\n

    While our own investigation is just getting started, machine learning at the Silicon Valley Center has deep roots. Much work has been done on applying machine learning in real systems, such as performance diagnosis in distributed systems and motion analysis in Kinect<\/a>; on building distributed systems, such as DryadLinq<\/a> and Dandelion<\/a>, that support large-scale machine learning; and on developing algorithms, such as locality-sensitive hashing and spectral partitioning, that are widely useful for machine learning tasks. Much work has also been done in the complementary area of differential privacy<\/a>. Through collaboration with colleagues, we hope to produce work that is both insightful and fruitful.<\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"

    We work on questions motivated by machine learning, in particular from the theoretical and computational perspectives. Our goals are to mathematically understand the effectiveness of existing learning algorithms and to design new learning algorithms. We combine expertise from diverse fields such as algorithms and complexity, statistics, and convex geometry. We are interested in a broad […]<\/p>\n","protected":false},"featured_media":0,"template":"","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"footnotes":""},"research-area":[13561,13556],"msr-locale":[268875],"msr-impact-theme":[],"msr-pillar":[],"class_list":["post-171268","msr-project","type-msr-project","status-publish","hentry","msr-research-area-algorithms","msr-research-area-artificial-intelligence","msr-locale-en_us","msr-archive-status-active"],"msr_project_start":"2014-01-28","related-publications":[162413,163159,163215,165842,155984,157309,162220],"related-downloads":[],"related-videos":[],"related-groups":[],"related-events":[],"related-opportunities":[],"related-posts":[],"related-articles":[],"tab-content":[],"slides":[],"related-researchers":[],"msr_research_lab":[],"msr_impact_theme":[],"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project\/171268"}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/msr-project"}],"version-history":[{"count":0,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project\/171268\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=171268"}],"wp:term":[{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=171268"},{"taxonomy":
"msr-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=171268"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=171268"},{"taxonomy":"msr-pillar","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-pillar?post=171268"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}