
And the award goes to...



Corinna Cortes, Head of Google Research in New York, has just been awarded the ACM Paris Kanellakis Theory and Practice Award jointly with Vladimir Vapnik (Royal Holloway College and NEC Research). The award recognizes their invention in the early 1990s of the soft-margin support vector machine, which has become the supervised machine learning method of choice for applications ranging from image analysis to document classification to bioinformatics.

What is so important about this invention? In supervised machine learning, we create algorithms that learn a rule for accurately classifying new examples (e.g., spam or non-spam) from a set of labeled training examples. There is no single attribute of an email message that tells us with certainty that it is spam. Instead, many attributes have to be considered, forming a vector of very high dimension. The same situation arises in many other practical machine learning tasks, including many that we work on at Google.
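
To make that "vector of very high dimension" concrete, here is a minimal sketch (an illustration of the idea, not code from any real spam filter) that turns an email into a vector of word counts over a tiny, hypothetical vocabulary:

```python
# Minimal sketch (illustrative only): representing an email as a
# feature vector of word counts over a fixed vocabulary.
from collections import Counter

# Hypothetical vocabulary; real systems use hundreds of thousands of attributes.
VOCABULARY = ["free", "winner", "meeting", "project", "prize", "lunch"]

def email_to_vector(text):
    """Map an email body to a vector with one count per vocabulary word."""
    counts = Counter(text.lower().split())
    return [counts[word] for word in VOCABULARY]

print(email_to_vector("You are a WINNER claim your free free prize"))
# -> [2, 1, 0, 0, 1, 0]
```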

To learn accurate classifiers, we need to solve three big problems. First, the rule learned from the training data should be accurate on new test examples that the learning algorithm has never seen. In other words, the rule must generalize well. Second, we must be able to find the optimal rule efficiently. Both of these problems are especially daunting for very high-dimensional data. Third, the method for computing the rule should be able to accommodate errors in the training data, such as messages that are given conflicting labels by different people (my spam may be your ham).

Soft-margin support vector machines wrap these three problems together into an elegant mathematical package. The crucial insight is that classification problems of this kind can be expressed as finding in very high dimension (or even infinite dimension) the hyperplane that best separates the positive examples (ham) from the negative ones (spam).
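
Written out in symbols (the notation below is added here for concreteness and is not taken from the original announcement), the basic, hard-margin version of the problem asks for the hyperplane w · x + b = 0 that separates the classes with the largest margin:

```latex
\min_{w,\,b} \ \tfrac{1}{2}\|w\|^2
\quad \text{subject to} \quad
y_i\,(w \cdot x_i + b) \ge 1, \qquad i = 1,\dots,n,
```

where each training example x_i carries a label y_i in {-1, +1} (say, +1 for ham and -1 for spam).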

Remarkably, the solution of this problem does not depend on the dimensionality of the data; it depends only on the pairwise similarities between the training examples, determined by the agreement or disagreement between corresponding attributes. Furthermore, a hyperplane that separates the training data well can be shown to generalize well to unseen data with the same statistical properties.
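
One standard way to see this (again, the formulas are a textbook paraphrase, not quoted from the post): in the dual form of the optimization above, the training examples enter only through a similarity function K(x_i, x_j), never through their individual coordinates:

```latex
\max_{\alpha} \ \sum_{i=1}^{n} \alpha_i
  - \tfrac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n}
      \alpha_i \alpha_j \, y_i y_j \, K(x_i, x_j)
\quad \text{subject to} \quad
\alpha_i \ge 0, \qquad \sum_{i=1}^{n} \alpha_i y_i = 0.
```

The dimension of the x_i never appears explicitly; only the n-by-n matrix of pairwise similarities does.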

Now, you might be asking how this could work if the training data is inconsistently labeled. After all, you cannot have the same example on both sides of the separating hyperplane. That's where the soft-margin idea comes in: the quadratic program that finds the optimal separating hyperplane can be cleverly modified to "give up" on a fraction of the training examples that cannot be classified correctly.
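
Concretely, the standard soft-margin formulation (textbook notation, not taken from the announcement) introduces a slack variable ξ_i for each training example and a parameter C that sets the price of violating the margin:

```latex
\min_{w,\,b,\,\xi} \ \tfrac{1}{2}\|w\|^2 + C \sum_{i=1}^{n} \xi_i
\quad \text{subject to} \quad
y_i\,(w \cdot x_i + b) \ge 1 - \xi_i, \qquad \xi_i \ge 0.
```

Each ξ_i measures how badly example i violates the margin; a small C lets the optimizer sacrifice a few inconsistent examples cheaply, while a large C insists on fitting almost all of them.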

With this crucial improvement, support vector machines became truly practical, and the core ideas have had a huge influence on the development of further learning algorithms for an ever wider range of tasks.
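
For readers who want to try this themselves, here is a minimal sketch using the open-source scikit-learn library (not a Google tool, and the toy data below is made up) that trains a soft-margin SVM and exposes the C parameter discussed above:

```python
# Minimal sketch with scikit-learn: a soft-margin SVM on toy 2-D data.
# The data and parameter values are illustrative only.
from sklearn import svm

# Four toy training examples: two labeled -1 ("spam"), two labeled +1 ("ham").
X_train = [[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]]
y_train = [-1, -1, +1, +1]

# C controls the soft margin: smaller C "gives up" on hard examples more easily.
classifier = svm.SVC(kernel="linear", C=1.0)
classifier.fit(X_train, y_train)

print(classifier.predict([[0.1, 0.0], [1.0, 0.9]]))  # expect [-1, +1]
```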

Congratulations to Corinna (and Vladimir) on the well-deserved award.