{"id":452692,"date":"2020-02-19T06:47:10","date_gmt":"2020-02-18T14:46:19","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-event&p=452692"},"modified":"2020-02-19T06:47:10","modified_gmt":"2020-02-19T14:47:10","slug":"ai-pizza","status":"publish","type":"msr-event","link":"https:\/\/www.microsoft.com\/en-us\/research\/event\/ai-pizza\/","title":{"rendered":"Machine Learning & Pizza"},"content":{"rendered":"

21 Station Road
Cambridge
CB1 2FB

Organisers:

Cheng Zhang

Dimitra Newman

View the whole series on talks.cam

About

Get your slice of the latest artificial intelligence research in Cambridge!

Starting in January 2020, we are relaunching the AI+Pizza talk series as a fresh way to discover some of the amazing machine learning and artificial intelligence research happening in Cambridge. Each event packs in two 15-minute talks, followed by a poster presentation session featuring hot-off-the-press AI research out of Cambridge, such as recent papers from venues like NIPS, ICLR, ICML, AISTATS, and UAI.

The event is open to the public and welcomes all students and researchers working in machine learning and artificial intelligence. Following the talks, free pizza and drinks will be served.

Speakers

Feb 28, 2020:

Speaker: Eric Nalisnick (University of Cambridge)

Title: Dropout as a Structured Shrinkage Prior

Abstract: Dropout regularization of deep neural networks has been a mysterious yet effective tool for preventing overfitting. Explanations for its success range from the prevention of "co-adapted" weights to it being a form of cheap Bayesian inference. We propose a novel framework for understanding multiplicative noise in neural networks, considering continuous distributions as well as Bernoulli noise (i.e. dropout). We show that multiplicative noise induces structured shrinkage priors on a network's weights. We derive the equivalence through reparametrization properties of scale mixtures, without invoking any approximations. We leverage these insights to propose a novel shrinkage framework for ResNets, terming the prior "automatic depth determination" as it is the natural analog of "automatic relevance determination" for network depth.
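For orientation, here is the textbook scale-mixture construction that this kind of equivalence builds on (a minimal sketch of my own, not the talk's derivation):

```latex
% Multiplicative noise acts as a random scale on a Gaussian weight:
% \tilde{w} = z w with w ~ N(0, sigma^2) and an independent noise z ~ p(z).
\[
\tilde{w} = z\,w,\qquad w \sim \mathcal{N}(0,\sigma^{2}),\quad z \sim p(z)
\;\Longrightarrow\;
p(\tilde{w}) \;=\; \int \mathcal{N}\!\left(\tilde{w};\,0,\;z^{2}\sigma^{2}\right) p(z)\,\mathrm{d}z .
\]
% For example, an inverse-gamma prior on the squared scale yields a
% Student-t marginal (up to scale), a classic heavy-tailed shrinkage prior.
```

Heavy-tailed marginals of this form shrink small weights toward zero while leaving large ones relatively untouched, which is one way to read dropout as structured shrinkage.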
Speaker: Andreas Damianou (Amazon)

Title: Gaussian processes with neural network inductive biases for fast domain adaptation

Abstract: Recent advances in learning algorithms for deep neural networks allow us to train such models efficiently and obtain rich feature representations. However, when it comes to transfer learning, local optima in the high-dimensional parameter space still pose a severe problem. Motivated by this issue, we propose a framework for performing probabilistic transfer learning in function space while, at the same time, leveraging the rich representations offered by deep neural networks. Our approach consists of linearizing a neural network to produce a Gaussian process model whose covariance function is given by the network's Jacobian. The result is a closed-form probabilistic model that allows fast domain adaptation with accompanying uncertainty estimation.
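As a rough sketch of the linearization idea above (my own toy construction, not the authors' code; the network, shapes, and prior/noise scales are all assumptions): treating the Jacobian of the network output with respect to its weights as a feature map turns the linearized network into a Bayesian linear model, i.e. a GP with covariance proportional to J(x) J(x')^T.

```python
# Toy sketch: linearize a tiny MLP around its current ("pretrained") weights;
# the per-example weight Jacobian acts as a feature map for a GP.
import numpy as np

rng = np.random.default_rng(0)

# A tiny stand-in for a pretrained network: f(x) = v . tanh(W x + b) + c
H = 16
W = rng.normal(size=(H, 1)); b = rng.normal(size=H)
v = rng.normal(size=H);      c = 0.0

def f(X):
    return np.tanh(X @ W.T + b) @ v + c                 # shape (n,)

def jacobian(X):
    """Gradient of f at each input w.r.t. all parameters (W, b, v, c)."""
    A = np.tanh(X @ W.T + b)                            # (n, H)
    dA = (1.0 - A**2) * v                               # df/db, via tanh chain rule
    dW = dA[:, :, None] * X[:, None, :]                 # df/dW, (n, H, 1)
    return np.concatenate(
        [dW.reshape(len(X), -1), dA, A, np.ones((len(X), 1))], axis=1)

# New-domain data; adaptation is closed-form GP regression.
Xs = np.linspace(-3, 3, 20)[:, None]
ys = np.sin(Xs[:, 0]) + 0.1 * rng.normal(size=20)

sig_p, sig_n = 1.0, 0.1                                 # assumed prior/noise scales
J = jacobian(Xs)
K = sig_p**2 * J @ J.T + sig_n**2 * np.eye(len(Xs))     # k(x,x') = sig_p^2 J J^T

Xq = np.linspace(-4, 4, 5)[:, None]
Kq = sig_p**2 * jacobian(Xq) @ J.T
mean = f(Xq) + Kq @ np.linalg.solve(K, ys - f(Xs))      # GP posterior mean
var = (sig_p**2 * np.sum(jacobian(Xq)**2, axis=1)
       - np.sum(Kq * np.linalg.solve(K, Kq.T).T, axis=1))
print(mean, var)
```

The point of the construction is that only the last, linear-Gaussian step touches the new domain's data, so adaptation is a single solve rather than a retraining run.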
Jan 31, 2020:

Speaker: Marc Brockschmidt (MSR Cambridge)

Title: Editing Sequences by Copying Spans

Abstract: Neural sequence-to-sequence models are finding increasing use in the editing of documents, for example in correcting a text document or repairing source code. In this paper, we argue that existing seq2seq models (with a facility to copy single tokens) are not a natural fit for such tasks, as they have to explicitly copy each unchanged token. We present an extension of seq2seq models capable of copying entire spans of the input to the output in one step, greatly reducing the number of decisions required during inference. This extension means that there are now many ways of generating the same output; we handle this by deriving a new training objective and a variation of beam search for inference that explicitly account for it.

Speaker: Vincent Dutordoir (Prowler)

Title: Bayesian Image Classification with Deep Convolutional Gaussian Processes

Abstract: There is a lot of focus on Bayesian deep learning at the moment, with many researchers tackling the problem by building on top of neural networks and making the inference look more Bayesian. In this talk, I'm going to follow a different strategy and use a Gaussian process, a well-understood probabilistic method with many attractive properties, as a primitive building block to construct fully Bayesian deep learning models. We show that the accuracy of these Bayesian methods, and the quality of their posterior uncertainties, depend strongly on the suitability of the modelling assumptions made in the prior, and that Bayesian inference by itself is often not enough. This motivates the development of a novel convolutional kernel, which leads to improved uncertainty and accuracy on a range of different problems.
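For flavour, here is a minimal sketch of the patch-based construction behind convolutional GP kernels that this line of work builds on (an assumed toy of my own, not the novel kernel from the talk; all sizes and hyperparameters are made up):

```python
# Toy "average over patches" convolutional GP kernel: compare two images by
# averaging a base kernel over all pairs of their local patches.
import numpy as np

def patches(img, p=3):
    """All p x p patches of a square image, flattened to rows."""
    s = img.shape[0]
    return np.array([img[i:i + p, j:j + p].ravel()
                     for i in range(s - p + 1) for j in range(s - p + 1)])

def rbf(A, B, ell=1.0):
    """Squared-exponential base kernel between rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

def conv_kernel(img1, img2, p=3):
    """k(x, x') = mean base-kernel response over all patch pairs."""
    P1, P2 = patches(img1, p), patches(img2, p)
    return rbf(P1, P2).mean()

rng = np.random.default_rng(0)
x1, x2 = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))
print(conv_kernel(x1, x1), conv_kernel(x1, x2))
```

Building the patch structure into the kernel is what encodes the convolutional inductive bias in the prior itself, rather than bolting Bayesian machinery onto a neural network afterwards.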