{"id":392777,"date":"2017-07-06T09:30:53","date_gmt":"2017-07-06T16:30:53","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-project&p=392777"},"modified":"2018-12-04T14:12:39","modified_gmt":"2018-12-04T22:12:39","slug":"foundations-of-optimization","status":"publish","type":"msr-project","link":"https:\/\/www.microsoft.com\/en-us\/research\/project\/foundations-of-optimization\/","title":{"rendered":"Foundations of Optimization"},"content":{"rendered":"

Optimization methods are the engine of machine learning algorithms. Examples abound: training neural networks with stochastic gradient descent, segmenting images with submodular optimization, and efficiently searching game trees with bandit algorithms. We aim to advance the mathematical foundations of both discrete and continuous optimization and to leverage these advances to develop new algorithms with a broad range of AI applications.

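As a concrete illustration of the first example above, here is a minimal sketch of stochastic gradient descent applied to a least-squares problem. The function name, synthetic data, step size, and epoch count are illustrative assumptions, not part of the project's work.

```python
# A minimal sketch of stochastic gradient descent (SGD) for least-squares regression.
# Illustrative only: data, learning rate, and epochs are arbitrary choices.
import numpy as np

def sgd_least_squares(X, y, lr=0.01, epochs=50, seed=0):
    """Minimize (1/2n) * ||X w - y||^2 by updating w one sample at a time."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        for i in rng.permutation(n):           # visit samples in random order each epoch
            grad_i = (X[i] @ w - y[i]) * X[i]  # gradient of the i-th squared-error term
            w -= lr * grad_i                   # stochastic gradient step
    return w

# Usage on synthetic data: recover a known weight vector from noisy observations.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
w_true = np.array([2.0, -1.0, 0.5])
y = X @ w_true + 0.01 * rng.normal(size=200)
print(sgd_least_squares(X, y))  # should be close to w_true
```

The per-sample updates trade gradient accuracy for cheap iterations, which is what makes the method practical for training large neural networks.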
Some of the current directions pursued by our members include convex optimization, distributed optimization, online optimization, non-convex optimization, and discrete optimization for random problems: