{"id":284153,"date":"2016-02-18T16:06:58","date_gmt":"2016-02-19T00:06:58","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-research-item&p=284153"},"modified":"2016-08-26T16:08:40","modified_gmt":"2016-08-26T23:08:40","slug":"online-learning-and-optimization-from-continuous-to-discrete-time","status":"publish","type":"msr-video","link":"https:\/\/www.microsoft.com\/en-us\/research\/video\/online-learning-and-optimization-from-continuous-to-discrete-time\/","title":{"rendered":"Online Learning and Optimization from Continuous to Discrete Time"},"content":{"rendered":"

Many discrete algorithms for convex optimization and online learning can be interpreted as discretizations of a continuous-time process. Perhaps the simplest and oldest example is gradient descent, which is the discretization of the ODE $\dot X = -\nabla f(X)$. Studying the continuous-time process offers many advantages: the analysis is often simple and elegant, it provides insights into the discrete process, and it can help streamline the design of algorithms (by performing the design in the continuous domain, then discretizing).

In this talk, I will present two such examples. In the first, I will show how some (stochastic) online learning algorithms can be obtained by discretizing an ODE on the simplex known as the replicator dynamics. I will review properties of the ODE, then give sufficient conditions for convergence of the discrete process by relating it to the solution trajectories of the ODE. In the second example, I will show how we can design an ODE for accelerated first-order optimization of smooth convex functions. The continuous-time design relies on an “inverse” Lyapunov argument: we start from an energy function that encodes the constraints of the problem and the desired convergence rate, then design dynamics tailored to that energy function. By carefully discretizing the ODE, we obtain a family of accelerated algorithms with the optimal rate of convergence.
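To make the gradient-descent example concrete: applying forward Euler with step size $h$ to $\dot X = -\nabla f(X)$ gives exactly the update $x_{k+1} = x_k - h \nabla f(x_k)$. Here is a minimal Python sketch of that correspondence on a hypothetical quadratic objective (an illustrative choice, not from the talk):

```python
import numpy as np

# Illustrative objective f(x) = 0.5 * x^T A x with a fixed positive-definite A.
A = np.array([[3.0, 0.5],
              [0.5, 1.0]])

def grad_f(x):
    return A @ x

def gradient_descent(x0, h=0.1, steps=200):
    """Forward-Euler discretization of the gradient flow dX/dt = -grad f(X):
    x_{k+1} = x_k + h * (-grad f(x_k)), i.e. gradient descent with step size h."""
    x = x0.copy()
    for _ in range(steps):
        x = x - h * grad_f(x)
    return x

x_star = gradient_descent(np.array([2.0, -1.0]))
print(x_star)  # approaches the minimizer x* = 0 of f
```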
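For the first example, one common form of the replicator dynamics on the simplex, for a loss vector $\ell$, is $\dot x_i = x_i(\langle x, \ell\rangle - \ell_i)$: the weight of action $i$ grows when its loss is below the current average. The sketch below (my illustration, not the talk's algorithm) compares a naive Euler step of this ODE with the multiplicative-weights (Hedge) update, whose small-step limit is the same ODE:

```python
import numpy as np

def replicator_step(x, loss, h):
    # Euler step of the replicator ODE: dx_i/dt = x_i * (<x, loss> - loss_i).
    # The update preserves sum(x) to first order; renormalize to control drift.
    mean_loss = x @ loss
    x = x + h * x * (mean_loss - loss)
    return x / x.sum()

def hedge_step(x, loss, h):
    # Multiplicative-weights (Hedge) update; as h -> 0 it recovers the replicator ODE,
    # since x_i * exp(-h * loss_i) / Z ~ x_i * (1 + h * (<x, loss> - loss_i)).
    w = x * np.exp(-h * loss)
    return w / w.sum()

rng = np.random.default_rng(0)
x_rep = x_hedge = np.ones(3) / 3
for _ in range(1000):
    # Stochastic losses; expert 0 is best on average (hypothetical data).
    loss = rng.uniform(size=3) + np.array([0.0, 0.2, 0.4])
    x_rep = replicator_step(x_rep, loss, 0.05)
    x_hedge = hedge_step(x_hedge, loss, 0.05)

print(x_rep, x_hedge)  # both distributions concentrate on the best expert
```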
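For the second example, a well-known instance of this program from the literature (not necessarily the exact ODE of the talk) is the ODE of Su, Boyd, and Candès, $\ddot X + \frac{3}{t}\dot X + \nabla f(X) = 0$, whose energy $E(t) = t^2\,(f(X) - f^*) + 2\,\|X + \tfrac{t}{2}\dot X - x^*\|^2$ is nonincreasing along trajectories, giving $f(X(t)) - f^* = O(1/t^2)$; careful discretization recovers Nesterov-type acceleration. The sketch below (again my illustration) integrates this ODE with semi-implicit Euler on the same hypothetical quadratic and compares it with Nesterov's accelerated gradient method:

```python
import numpy as np

A = np.array([[3.0, 0.5],
              [0.5, 1.0]])  # same quadratic f(x) = 0.5 * x^T A x as above

def f(x):
    return 0.5 * x @ A @ x

def grad_f(x):
    return A @ x

def accelerated_ode(x0, h=0.01, steps=5000, t0=1.0):
    """Semi-implicit Euler for X'' + (3/t) X' + grad f(X) = 0, written as the
    first-order system X' = V, V' = -(3/t) V - grad f(X), started at t = t0."""
    x, v, t = x0.copy(), np.zeros_like(x0), t0
    for _ in range(steps):
        v = v + h * (-(3.0 / t) * v - grad_f(x))
        x = x + h * v
        t += h
    return x

def nesterov(x0, L, steps=200):
    """Nesterov's accelerated gradient for L-smooth convex f, for comparison."""
    x, y = x0.copy(), x0.copy()
    for k in range(1, steps + 1):
        x_next = y - grad_f(y) / L
        y = x_next + (k - 1) / (k + 2) * (x_next - x)
        x = x_next
    return x

x0 = np.array([2.0, -1.0])
L = np.linalg.eigvalsh(A).max()  # smoothness constant of the quadratic
print(f(accelerated_ode(x0)), f(nesterov(x0, L)))  # both near f* = 0
```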

Video: https://youtu.be/KPlG644z2C4