Programming Research Group: A Learnability Model for Universal Representations
| Content Provider | Semantic Scholar |
|---|---|
| Author | Page, David |
| Copyright Year | 1997 |
| Abstract | This paper defines a new computational model of inductive learning, called U-learnability (Universal Learnability), that is well suited to rich representation languages, most notably universal (Turing-equivalent) representations. It is motivated by three observations. Firstly, existing computational models of inductive learning, the best known of which are identification in the limit and PAC-learnability, either fail to provide guarantees of accuracy and computational efficiency or require severely restricted representation languages. Secondly, practical machine learning programs use rich representations yet often perform efficiently and achieve high degrees of accuracy. Thirdly, these machine learning programs often make assumptions about the underlying probability distribution over target concepts; such an assumption provides a soft bias for learning. This contrasts with existing computational models of inductive learning, which incorporate only language biases, or hard biases. U-learnability incorporates soft biases in the form of a Bayesian prior distribution over target concepts. (Other authors have argued for the use of Bayesian prior distributions in a manner more restricted than their use in U-learnability, though this has not led to a new model of polynomial-time learnability.) In addition, because of problems of undecidability with universal representations, U-learnability makes special provision for the use of time-bounded concepts. A time bound takes the form of a polynomial function of the size of an example. |
| File Format | PDF HTM / HTML |
| Language | English |
| Access Restriction | Open |
| Content Type | Text |
| Resource Type | Article |
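The abstract's notion of a "soft bias" can be illustrated with a minimal sketch (not taken from the paper): a prior distribution over candidate hypotheses, where the learner prefers the highest-prior hypothesis consistent with the labelled examples, and each hypothesis evaluation is given an explicit step budget standing in for the paper's polynomial time bound. All names and the toy hypothesis space below are hypothetical; the paper's formal model is far more general.

```python
# Illustrative sketch only: a Bayesian prior as a "soft bias" over hypotheses.
# `time_bound` stands in for the polynomial time bound on evaluating a
# (possibly undecidable) concept; here it is just passed to each hypothesis.

def consistent(hypothesis, examples, time_bound):
    """Check that the hypothesis agrees with every labelled example,
    evaluating under the given step budget."""
    return all(hypothesis(x, time_bound) == label for x, label in examples)

def learn(prior, examples, time_bound):
    """Return the highest-prior hypothesis consistent with the data.
    `prior` is a list of (probability, hypothesis) pairs."""
    for p, h in sorted(prior, key=lambda ph: -ph[0]):
        if consistent(h, examples, time_bound):
            return h
    return None  # no consistent hypothesis in the space

# Toy hypothesis space over integers, with a prior that softly
# prefers the parity concept over the sign concept.
prior = [
    (0.7, lambda x, t: x % 2 == 0),   # "x is even"
    (0.3, lambda x, t: x >= 0),       # "x is non-negative"
]
examples = [(2, True), (4, True), (-6, True), (3, False)]
h = learn(prior, examples, time_bound=100)
```

Here only the parity hypothesis is consistent with the data (the sign hypothesis mislabels -6), so the learner returns it; with ambiguous data, the prior would break the tie in favour of the higher-probability hypothesis, which is the "soft" counterpart of a hard language restriction.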