Fast Transpose Methods for Kernel Learning on Sparse Data
| Content Provider | CiteSeerX |
|---|---|
| Author | Haffner, Patrick |
| Abstract | Kernel-based learning algorithms, such as Support Vector Machines (SVMs) or the Perceptron, often rely on sequential optimization where a few examples are added at each iteration. Updating the kernel matrix usually requires matrix-vector multiplications. We propose a new method based on transposition to speed up this computation on sparse data. Instead of dot products over sparse feature vectors, our computation incrementally merges lists of training examples and minimizes access to the data. Caching and shrinking are also optimized for sparsity. On very large natural language tasks (tagging, translation, text classification) with sparse feature representations, a 20- to 80-fold speedup over LIBSVM is observed using the same SMO algorithm. Theory and experiments explain what type of sparsity structure is needed for this approach to work, and why its adaptation to Maxent sequential optimization is inefficient. |
| File Format | |
| Access Restriction | Open |
| Subject Keyword | Sparse Data; Kernel Learning; Fast Transpose Method; 80-fold Speedup; Support Vector Machine; Sparsity Structure; New Method; Sparse Feature Representation; Matrix-vector Multiplication; Maxent Sequential Optimization; Sparse Feature Vector; Sequential Optimization; Large Natural Language Task; SMO Algorithm; Text Classification; Kernel-based Learning Algorithm; Training Example |
| Content Type | Text |
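The transpose idea described in the abstract, computing a kernel row by merging per-feature (inverted) lists of training examples rather than taking a sparse dot product per example, can be sketched as follows for a linear kernel. This is a minimal illustration under assumed data structures (feature dicts, an inverted index), not the paper's actual implementation, which also covers caching, shrinking, and non-linear kernels.

```python
from collections import defaultdict

def build_inverted_index(examples):
    """Transpose the sparse data: map each feature id to the
    list of (example id, value) pairs that contain it."""
    index = defaultdict(list)
    for i, feats in enumerate(examples):
        for f, v in feats.items():
            index[f].append((i, v))
    return index

def kernel_row_transpose(index, n_examples, new_feats):
    """Linear-kernel row K[i] = <x_i, x_new> computed by iterating
    once over the new example's features and merging the inverted
    lists, touching only examples that share a feature with x_new."""
    row = [0.0] * n_examples
    for f, v in new_feats.items():
        for i, w in index.get(f, ()):
            row[i] += w * v
    return row

# Tiny illustrative data: sparse examples as {feature_id: value} dicts.
train = [{0: 1.0, 2: 2.0}, {1: 3.0}, {0: 0.5, 1: 1.0}]
idx = build_inverted_index(train)
print(kernel_row_transpose(idx, len(train), {0: 2.0, 1: 1.0}))
# → [2.0, 3.0, 2.0]
```

The per-example dot-product approach costs work proportional to every training example's feature list; the transposed merge only touches examples that share an active feature with the new example, which is where the speedup on highly sparse data comes from.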