Venue: Microsoft Research New England
Horace Mann Conference Room
One Memorial Drive
Cambridge, MA 02142

About

The fifth annual New England Machine Learning Day will be Friday, May 6, 2016, at Microsoft Research New England, One Memorial Drive, Cambridge, MA 02142. The event will bring together local academics and researchers in machine learning and its applications. There will be a lively poster session during lunch.

Related events:

- NEML 2020
- NEML 2019
- NEML 2018
- NEML 2017
- NEML 2016
- NEML 2015
- NEML 2014
- NEML 2013
- NEML 2012

Agenda

9:50 - 10:00
Opening remarks

10:00 - 10:30, Bill Freeman, MIT / Google
Learning to see by listening
Children may learn about the world by pushing, banging, and manipulating things, watching and listening as materials make their distinctive sounds: dirt makes a thud; ceramic makes a clink. These sounds reveal physical properties of the objects, as well as the force and motion of the physical interaction. We've explored a toy version of that learning-through-interaction by recording audio and video while we hit many things with a drumstick. We developed an algorithm to predict sounds from silent videos of the drumstick interactions. The algorithm uses a recurrent neural network to predict sound features from videos and then produces a waveform from these features with an example-based synthesis procedure. We demonstrate that the sounds generated by our model are realistic enough to fool participants in a "real or fake" psychophysical experiment, and that the task of predicting sounds allows our system to learn about material properties in the scene. Joint work with Andrew Owens, Phillip Isola, Josh McDermott, Antonio Torralba, and Edward H. Adelson. http://arxiv.org/abs/1512.08512, to appear in CVPR 2016.

10:35 - 11:05, Nicolo Fusi, Microsoft Research
Dissecting Genetic Signals Using Gaussian Processes

11:10 - 11:40, Stefanie Jegelka, MIT
Determinantal Point Processes in Machine Learning: old and new ideas
Many machine learning problems are, at their core, subset selection problems. Probabilistic models and practical algorithms for such scenarios rely on having sufficiently accurate yet tractable distributions over discrete sets. As one such example, Determinantal Point Processes (DPPs) have gained popularity in machine learning as elegant probabilistic models of diversity. Yet their wide applicability has been hindered by computationally expensive sampling algorithms. In this talk, I will outline "old" and new applications of DPPs, and ideas for faster sampling procedures. These procedures build on new insights for algorithms that compute bilinear inverse forms. Our results find applications beyond DPPs, such as submodular maximization for sensing. This is joint work with Chengtao Li, Suvrit Sra, Josip Djolonga, and Andreas Krause.
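As a concrete illustration of the object this abstract studies (not of the speakers' fast sampling algorithms), here is a minimal brute-force sketch of sampling from a DPP with likelihood kernel L, where P(S) is proportional to det(L_S). The 4-item ground set and its feature vectors are made up for illustration; enumerating all subsets is exponential in the number of items, which is exactly the cost the talk's faster procedures aim to avoid.

```python
# Brute-force sampling from a DPP on a tiny, made-up ground set.
# Illustrative only: exponential in n, NOT the fast samplers from the talk.
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical item features; items 0/1 and 2/3 are near-duplicates.
features = np.array([[1.0, 0.0],
                     [0.9, 0.1],
                     [0.0, 1.0],
                     [0.1, 0.9]])
L = features @ features.T + 1e-6 * np.eye(len(features))  # PSD kernel

n = L.shape[0]
subsets, weights = [], []
for r in range(n + 1):
    for S in itertools.combinations(range(n), r):
        subsets.append(S)
        # P(S) is proportional to det(L_S); the empty subset gets weight 1.
        weights.append(np.linalg.det(L[np.ix_(S, S)]) if S else 1.0)

# Exact normalizer: the sum of det(L_S) over all subsets equals det(L + I).
probs = np.array(weights) / np.linalg.det(L + np.eye(n))
sample = subsets[rng.choice(len(subsets), p=probs)]
print("sampled subset:", sample)
```

Because the determinant of a principal submatrix containing two nearly identical rows is close to zero, subsets that pair up near-duplicates are rarely drawn, which is the sense in which DPPs model diversity.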
11:40 - 1:45
Lunch and posters

1:45 - 2:15, Eugene Charniak, Brown
Syntactic Parsing, a Machine Learning Success Story (whiteboard talk)
Syntactic parsing is one of the great success stories of modern, machine-learning-based natural-language processing. We briefly examine why it is useful in NLP and how it has gone from non-functional to very accurate over the last twenty years or so.

2:20 - 2:50, Lorenzo Orecchia, Boston University
Spectral Graph Algorithms Without Eigenvectors
Classical spectral algorithms for graph problems aim to extract information from the top k eigenvectors, with the goal of reducing the dimensionality of the problem on the way to detecting significant features, such as well-separated clusters or dense subsets. Similarly, classical spectral graph theory focuses on the relation between the top eigenvectors of graph matrices and combinatorial quantities of interest, such as conductance and the size of the maximum independent set. For these reasons, eigenvectors are often the main object of study in these fields.

However, eigenvectors are inherently unstable objects. For instance, the top eigenvector of a graph can change completely under very small modifications of the graph, e.g., the removal of edges. This is particularly problematic for large-data applications, where the edges of the graph may be noisy and where we may not want to compute a large number of eigenvectors.

In this talk, I will survey how the eigenvector problem can be regularized to construct a convex optimization problem whose optimal solution approximates the eigenvector while changing smoothly as the instance matrix is modified.

Besides being more robust when the instance graph is noisy, these "regularized eigenvectors" can be used to speed up a number of fundamental spectral algorithms, e.g., to compute balanced partitions of a graph or to sparsify a matrix. At the same time, the smoothness of these objects also allows us to simplify many classical proofs of results in spectral graph theory.

2:50 - 3:20
Coffee break

3:20 - 3:50, Brendan O'Connor, University of Massachusetts Amherst
Measuring social phenomena in news and social media
What can text analysis tell us about society? Corpora of news, social media, and historical documents record events, beliefs, and culture. Natural language processing and machine learning methods hold great promise for exploring this type of data. At the same time, our current NLP methods are confounded with social variables: I'll preview ongoing work assessing NLP technology for social media messages, which shows disparate effectiveness for texts authored by different demographic groups. This is not surprising given what we know about sociolinguistics, but these phenomena may be less well known to technical practitioners. As the scope of available textual data expands to creative and non-standard language from a wide variety of social groups, we encounter crucial modeling and data collection challenges to ensure effective and equitable language technologies.

3:55 - 4:25, Guy Bresler, MIT
Learning tree-structured Ising models in order to make predictions

4:30 - 5:00, Finale Doshi-Velez, Harvard
Characterizing Non-Identifiability in Non-negative Matrix Factorization
Nonnegative matrix factorization (NMF) is a popular dimension reduction technique that produces an interpretable decomposition of the data into parts. However, this decomposition is often not identifiable, even beyond simple cases of permutation and scaling. Non-identifiability is an important concern in practical data exploration settings, in which the basis of the NMF factorization may be interpreted as having some kind of meaning: it may be important to know that other non-negative characterizations of the data were also possible. While other studies have provided criteria under which NMF is unique, in this talk I'll discuss when and how an NMF might *not* be unique. Then I'll discuss some algorithms that leverage these insights to find alternate solutions.
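The numbers below are a hand-picked toy, not taken from the talk: a minimal sketch of the non-identifiability the abstract describes, showing two exact nonnegative factorizations of the same matrix that are not related by permutation or scaling. The trick is to mix a strictly positive factorization with an invertible matrix A that is not a scaled permutation.

```python
# Two genuinely different exact NMFs of the same matrix (hand-picked toy).
import numpy as np

W = np.array([[1.0, 2.0],
              [2.0, 1.0],
              [1.0, 1.0]])
H = np.array([[1.0, 1.0, 2.0],
              [2.0, 1.0, 1.0]])
X = W @ H  # the data matrix both factorizations reconstruct exactly

# Mix the factors with an invertible A that is NOT a scaled permutation.
# Because W and H are strictly positive, mild mixing keeps both nonnegative.
t = 0.2
A = np.array([[1.0, t],
              [t, 1.0]])
W2 = W @ A
H2 = np.linalg.inv(A) @ H

assert np.all(W2 >= 0) and np.all(H2 >= 0)   # still a valid NMF
assert np.allclose(W2 @ H2, X)               # same data, different parts
print("alternative basis W2:\n", W2)
```

Any interpretation attached to the columns of W is therefore an interpretation of one of many equally valid nonnegative bases, which is the practical concern the abstract raises.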
Committee

Tamara Broderick, MIT
Adam Tauman Kalai, Microsoft Research
Alexander Rush, Harvard
Venkatesh Saligrama, Boston University

The steering committee that selects the organizers of ML Day each year consists of Ryan Adams, Adam Tauman Kalai, and Joshua Tenenbaum.