Making the Most of On-line Learning: An Introduction to Learning on the Internet.
| Content Provider | Semantic Scholar |
|---|---|
| Author | Somolu, Oreoluwa; Kaser, Joyce; Hanson, Katherine |
| Copyright Year | 2004 |
| Abstract | We study on-line learning of a linearly separable rule with a simple perceptron. Training utilizes a sequence of uncorrelated, randomly drawn N-dimensional input examples. In the thermodynamic limit the generalization error after training with P such examples can be calculated exactly. For the standard perceptron algorithm it decreases like (N/P)^{1/3} for large P/N, in contrast to the faster (N/P)^{1/2} behavior of the so-called Hebbian learning. Furthermore, we show that a specific parameter-free on-line scheme, the AdaTron algorithm, gives an asymptotic (N/P) decay of the generalization error. This coincides (up to a constant factor) with the bound for any training process based on random examples, including off-line learning. Simulations confirm our results. A very important feature of Feedforward Neural Networks is their ability to learn a rule from examples [1, 2]. Methods known from Statistical Mechanics have been successfully used to study this property [3-5], mainly for the so-called simple perceptron [6, 7]. Usually, learning is interpreted as an optimization process in the space of network parameters or weights. The corresponding cost function measures the performance of the trained network (the student) on a given set of examples. This is often termed off-line learning [1]. In the following, however, we study on-line learning of a linearly separable rule with a simple perceptron. Here, ... (see the simulation sketch below the table) |
| File Format | PDF; HTM / HTML |
| Alternate Webpage(s) | http://www2.edc.org/GDI/publications_SR/making_FULLBOOK.pdf |
| Alternate Webpage(s) | http://www.edc.org/GDI/publications_SR/making_FULLBOOK.pdf |
| Language | English |
| Access Restriction | Open |
| Content Type | Text |
| Resource Type | Article |
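
The abstract describes a teacher-student setup that is straightforward to reproduce numerically. Below is a minimal sketch (not the paper's own code) of on-line learning of a linearly separable rule: a teacher perceptron labels uncorrelated random N-dimensional inputs, and a student is trained on each example exactly once. For such random inputs, the generalization error is eps = arccos(R)/pi, where R is the normalized overlap between student and teacher weights. The dimension N, the P/N values, and the 1/sqrt(N) update scaling are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500  # input dimension (illustrative; the theory holds in the limit N -> infinity)

def gen_error(student, teacher):
    """Generalization error for random inputs: eps = arccos(R) / pi,
    where R is the normalized student-teacher overlap."""
    r = (student @ teacher) / (np.linalg.norm(student) * np.linalg.norm(teacher))
    return np.arccos(np.clip(r, -1.0, 1.0)) / np.pi

def train(rule, alpha):
    """One on-line pass over P = alpha * N uncorrelated random examples."""
    teacher = rng.standard_normal(N)
    student = np.zeros(N)
    for _ in range(int(alpha * N)):
        x = rng.standard_normal(N)
        label = np.sign(teacher @ x)            # linearly separable target rule
        if rule == "hebbian":                   # Hebbian: update on every example
            student += label * x / np.sqrt(N)
        elif np.sign(student @ x) != label:     # perceptron: update on errors only
            student += label * x / np.sqrt(N)
    return gen_error(student, teacher)

for alpha in (10, 40, 160):
    print(f"P/N = {alpha:4d}   perceptron eps = {train('perceptron', alpha):.3f}"
          f"   Hebbian eps = {train('hebbian', alpha):.3f}")
```

For large P/N the perceptron curve should decay roughly like (N/P)^{1/3} and the Hebbian one like (N/P)^{1/2}, matching the rates quoted in the abstract; the parameter-free AdaTron rule it mentions (not sketched here) reaches the optimal (N/P) rate.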