Learning with marginalized corrupted features (2013)
| Content Provider | CiteSeerX |
|---|---|
| Author | Chen, Minmin; Tyree, Stephen; Weinberger, Kilian Q. |
| Description | In Proceedings of the 30th International Conference on Machine Learning (ICML-13) |
| Abstract | The goal of machine learning is to develop predictors that generalize well to test data. Ideally, this is achieved by training on very large (infinite) training data sets that capture all variations in the data distribution. In the case of finite training data, an effective solution is to extend the training set with artificially created examples—which, however, is also computationally costly. We propose to corrupt training examples with noise from known distributions within the exponential family and present a novel learning algorithm, called marginalized corrupted features (MCF), that trains robust predictors by minimizing the expected value of the loss function under the corrupting distribution—essentially learning with infinitely many (corrupted) training examples. We show empirically on a variety of data sets that MCF classifiers can be trained efficiently, may generalize substantially better to test data, and are more robust to feature deletion at test time. (A code sketch of this idea follows the table.) |
| Publisher Date | 2013-01-01 |
| Access Restriction | Open |
| Content Type | Text |
| Resource Type | Conference Proceedings |
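
For the quadratic loss, marginalizing over the corrupting distribution yields a closed-form solution: the expected loss decomposes into the loss on the clean data plus a data-dependent regularizer built from the noise variance. The sketch below is a minimal NumPy illustration of this idea for unbiased blankout (dropout-style) corruption, not the authors' reference implementation; the function name, toy data, and corruption rate `q` are illustrative assumptions.

```python
import numpy as np

def mcf_quadratic_blankout(X, y, q):
    """Fit a linear predictor by minimizing the expected quadratic loss
    under unbiased blankout corruption with rate q.

    Each corrupted feature is 0 with probability q and x_d / (1 - q)
    otherwise, so E[x_tilde_d] = x_d and
    Var[x_tilde_d] = q / (1 - q) * x_d**2.

    Setting the gradient of sum_n E[(w . x_tilde_n - y_n)^2] to zero
    gives a ridge-like closed form:
        w = (X^T X + q/(1-q) * diag(sum_n x_n**2))^{-1} X^T y
    """
    # Expected second moment of the corrupted data: the clean Gram
    # matrix plus the diagonal variance added by the blankout noise.
    second_moment = X.T @ X + (q / (1.0 - q)) * np.diag((X ** 2).sum(axis=0))
    return np.linalg.solve(second_moment, X.T @ y)

# Toy usage (hypothetical data): noisy linear targets, corruption rate 0.3.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ w_true + 0.1 * rng.normal(size=200)
print(mcf_quadratic_blankout(X, y, q=0.3))
```

Because the expectation is computed analytically, no corrupted copies of the training set are ever materialized: this is the sense in which MCF trains on "infinitely many" corrupted examples at roughly the cost of one ridge-regression solve.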