{"id":169805,"date":"2001-11-05T12:17:42","date_gmt":"2001-11-05T12:17:42","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/project\/support-vector-machines\/"},"modified":"2019-08-14T14:33:07","modified_gmt":"2019-08-14T21:33:07","slug":"support-vector-machines","status":"publish","type":"msr-project","link":"https:\/\/www.microsoft.com\/en-us\/research\/project\/support-vector-machines\/","title":{"rendered":"Support Vector Machines"},"content":{"rendered":"

Support vector machines are a set of algorithms that learn from data by choosing models that maximize the margin between classes.<\/p>\n

Support vector machines<\/a>\u00a0(SVMs) are a family of algorithms for\u00a0classification<\/a>,\u00a0regression<\/a>,\u00a0transduction<\/a>, novelty detection<\/a>, and\u00a0semi-supervised learning<\/a>. They work by choosing a model that\u00a0maximizes the margin<\/a> on the training set.<\/p>\n
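The margin-maximization idea can be sketched with a few lines of code. The example below uses scikit-learn's `SVC`, which is assumed here purely for illustration and is not mentioned in the original text:

```python
from sklearn.svm import SVC

# Two small, linearly separable point clouds in the plane.
X = [[0.0, 0.0], [0.0, 1.0], [3.0, 3.0], [3.0, 4.0]]
y = [0, 0, 1, 1]

# A linear-kernel SVM chooses the separating hyperplane that
# maximizes the margin: the distance to the closest training
# points, which are called the "support vectors".
clf = SVC(kernel="linear", C=1.0)
clf.fit(X, y)

# Only the points that sit on the margin determine the boundary.
print(clf.support_vectors_)
```

New points are then classified by which side of the maximum-margin boundary they fall on.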

SVMs\u00a0were originally developed by\u00a0Vladimir Vapnik<\/a> in 1963. Since the mid-90s, an energetic research community has grown around them. If you want to learn more about SVMs, you can read Chris Burges’ tutorial<\/a>.\u00a0Nello Cristianini<\/a> and\u00a0John Shawe-Taylor<\/a> have written\u00a0a textbook<\/a> about them.\u00a0Bernhard Sch\u00f6lkopf<\/a> and\u00a0Alex Smola<\/a> wrote a textbook about kernel methods<\/a>, a closely related set of techniques.<\/p>\n

Since 1998, we’ve done basic research into making SVMs more user-friendly. Our research has resulted in:<\/p>\n