Support Vector Machines for Structured Outputs

Over the last decade, much of the research on discriminative learning has focused on problems like classification and regression, where the prediction is a single scalar value. But what if we need to predict complex objects like trees, vectors, or orderings? Such problems arise, for example, when a natural language parser needs to predict the correct parse tree for a given sentence, when one needs to optimize a multivariate performance measure like the F1-score, or when a search engine needs to predict which ranking is best for a given query.

This talk will discuss a support vector approach to predicting complex objects. It generalizes the idea of margins to complex prediction problems and a wide range of loss functions. While the resulting training problems have exponential size, a simple algorithm allows training in polynomial time. Empirical results will be presented for several example applications.
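To make the margin generalization concrete, the following is a minimal sketch of the standard structural SVM training problem with margin rescaling, written in notation assumed here rather than taken from the talk: Ψ(x, y) is a joint feature map over input-output pairs, Δ(y_i, y) is the loss incurred by predicting y instead of the correct y_i, and ξ_i are slack variables. There is one constraint for every incorrect output y, which is why the problem has exponential size.

$$
\min_{w,\ \xi \ge 0}\ \frac{1}{2}\|w\|^2 + \frac{C}{n}\sum_{i=1}^{n}\xi_i
\qquad \text{s.t.}\quad \forall i,\ \forall y \ne y_i:\quad
w^{\top}\Psi(x_i, y_i) - w^{\top}\Psi(x_i, y) \ \ge\ \Delta(y_i, y) - \xi_i
$$

Since enumerating all incorrect outputs is infeasible, a standard way to train such models in practice is to iteratively add only the currently most violated constraint for each example and re-solve, which is the kind of approach that yields the polynomial-time guarantee mentioned above.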

Speaker Details

Thorsten Joachims is an Assistant Professor in the Department of Computer Science at Cornell University. In 2001, he completed his dissertation, “The Maximum-Margin Approach to Learning Text Classifiers: Methods, Theory, and Algorithms”, advised by Prof. Katharina Morik at the University of Dortmund. He also received his Diplom in Computer Science there in 1997, with a thesis on WebWatcher, a browsing assistant for the Web. His research interests center on a synthesis of theory and system building in the field of machine learning, with a focus on support vector machines and machine learning with text. He is the author of SVM-Light, a software package for support vector learning. From 1994 to 1996, he was a visiting scientist at Carnegie Mellon University with Prof. Tom Mitchell.

Date:
Speaker:
Thorsten Joachims
Affiliation:
Cornell University