SVM Learning for Interdependent and Structured Output Spaces
| Content Provider | Semantic Scholar |
|---|---|
| Author | Altun, Yasemin |
| Abstract | Supervised learning, one of the most important areas of machine learning, is the general problem of learning a function that predicts the best value for a response variable y for an observation x by making use of a sample of input-output pairs. Traditionally, in classification, the values that y can take are simple, in the sense that they can be characterized by an arbitrary identifier. However, in many real-world applications the outputs are often complex, in that either there are dependencies between classes (e.g., taxonomies used in document classification), or the classes are objects that have some internal structure such that they describe a configuration over interdependent components (e.g., sequences, parse trees). For such problems, which are commonly called structured output prediction problems, standard multiclass approaches are rendered ineffective, since the size of the output space is very large (e.g., the set of label sequences scales exponentially with the length of the input sequence). More importantly, it is crucial to capture the common properties that are shared by the set of classes in order to generalize across classes as well as to generalize across input patterns. In this paper, we approach structured output prediction problems by generalizing a multiclass Support Vector Machine formulation by Crammer and Singer (2001) to the broad problem of learning for interdependent and structured outputs. |
| File Format | PDF, HTML |
| Alternate Webpage(s) | http://ttic.uchicago.edu/~altun/pubs/AltHofTso06.pdf |
| Language | English |
| Access Restriction | Open |
| Content Type | Text |
| Resource Type | Article |
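
The abstract above describes generalizing the Crammer and Singer (2001) multiclass SVM to structured outputs. As a hedged sketch of the kind of margin-based training problem this leads to (the notation follows the standard structured-SVM literature rather than this record; the joint feature map Ψ(x, y) and label loss Δ(y_i, y) are the usual symbols), the optimization can be written as:

```latex
% Sketch of a margin-rescaled structured SVM (standard literature form, not quoted from this record).
% \Psi(x, y): joint feature map over input-output pairs; \Delta(y_i, y): loss between labels.
\begin{aligned}
\min_{\mathbf{w},\,\boldsymbol{\xi}\ge 0}\quad
  & \tfrac{1}{2}\lVert \mathbf{w} \rVert^{2} + \frac{C}{n}\sum_{i=1}^{n}\xi_i \\
\text{s.t.}\quad
  & \langle \mathbf{w},\, \Psi(x_i, y_i) \rangle - \langle \mathbf{w},\, \Psi(x_i, y) \rangle
    \;\ge\; \Delta(y_i, y) - \xi_i
    \qquad \forall\, i,\; \forall\, y \in \mathcal{Y}\setminus\{y_i\}, \\
  & \text{with prediction } f(x) = \arg\max_{y \in \mathcal{Y}} \langle \mathbf{w},\, \Psi(x, y) \rangle .
\end{aligned}
```

Because the constraint set ranges over an output space that can grow exponentially (as the abstract notes for label sequences), implementations in this line of work typically handle it by generating only the most violated constraints iteratively rather than enumerating all of them.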