{"id":759166,"date":"2021-06-29T11:14:45","date_gmt":"2021-06-29T18:14:45","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-research-item&p=759166"},"modified":"2021-07-08T11:19:08","modified_gmt":"2021-07-08T18:19:08","slug":"directions-in-ml-structured-models-for-automated-machine-learning","status":"publish","type":"msr-video","link":"https:\/\/www.microsoft.com\/en-us\/research\/video\/directions-in-ml-structured-models-for-automated-machine-learning\/","title":{"rendered":"Directions in ML: Structured Models for Automated Machine Learning"},"content":{"rendered":"
Automated machine learning (AutoML) seeks algorithmic methods for finding the machine learning pipeline and hyperparameters that best fit a new dataset. The complexity of this problem is astounding: viewed as an optimization problem, it entails a search over an exponentially large space of discrete and continuous variables. An efficient solution requires a strong structural prior on the optimization landscape.<\/p>\n
In this talk, we survey some of the most powerful techniques for AutoML on tabular datasets. We focus in particular on techniques for meta-learning: how to quickly learn good models on a new dataset, given good models for a large collection of previously seen datasets. We will see that remarkably simple structural priors, such as the low-rank structure exploited by the AutoML method Oboe, produce state-of-the-art results. The success of these simple models suggests that AutoML may be simpler than previously understood.<\/p>\n