Convergence of an online gradient algorithm with penalty for two-layer neural networks
| Content Provider | Semantic Scholar |
|---|---|
| Author | Shao, Hongmei; Liu, Lijun |
| Copyright Year | 2006 |
| Abstract | The online gradient algorithm is widely used for training feedforward neural networks, and adding a penalty term is a common and popular method for improving the generalization performance of the trained network. In this paper, a convergence theorem is proved for the online gradient learning algorithm with a penalty term proportional to the magnitude of the weights. The error function with such a penalty term is shown to decrease monotonically during the training iteration. A key point of the proof is the boundedness of the network weights, which is itself a desirable reward of adding the penalty. |
| Starting Page | 107 |
| Ending Page | 111 |
| Page Count | 5 |
| File Format | PDF; HTM / HTML |
| Alternate Webpage(s) | http://www.wseas.us/e-library/conferences/2006dallas/papers/519-366.pdf |
| Language | English |
| Access Restriction | Open |
| Content Type | Text |
| Resource Type | Article |
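The abstract describes online (per-sample) gradient training of a two-layer network with a penalty term on the weight magnitudes. The sketch below is a minimal illustration of that setup, not the paper's algorithm: the network size, activation, learning rate `eta`, and penalty coefficient `lam` are all assumed for the example. It shows the property the paper analyzes, namely that the penalized error decreases over training while the penalty keeps the weights bounded.

```python
import math
import random

# Illustrative sketch (assumed details, not from the paper): a two-layer
# network with tanh hidden units and a linear output, trained by online
# gradient descent on the penalized error
#   E = (1/2) * sum_i (y_i - f(x_i))^2 + (lam/2) * ||weights||^2.

random.seed(0)
H = 4                    # number of hidden units (assumed)
eta, lam = 0.05, 1e-3    # learning rate and penalty coefficient (assumed)

# w[j]: input-to-hidden weight, v[j]: hidden-to-output weight (scalar input)
v = [random.uniform(-0.5, 0.5) for _ in range(H)]
w = [random.uniform(-0.5, 0.5) for _ in range(H)]

def forward(x):
    """Network output and hidden activations for input x."""
    h = [math.tanh(w[j] * x) for j in range(H)]
    return sum(v[j] * h[j] for j in range(H)), h

def penalized_error(data):
    """Squared error plus the L2 weight penalty."""
    sq = sum((y - forward(x)[0]) ** 2 for x, y in data) / 2
    reg = lam * (sum(a * a for a in v) + sum(a * a for a in w)) / 2
    return sq + reg

# Toy regression data (assumed): sample points of sin on [-1, 1]
data = [(x / 10, math.sin(x / 10)) for x in range(-10, 11)]

before = penalized_error(data)
for _ in range(200):              # online: update after every single sample
    for x, y in data:
        out, h = forward(x)
        err = out - y
        for j in range(H):
            # Gradients of the penalized per-sample error
            gv = err * h[j] + lam * v[j]
            gw = err * v[j] * (1 - h[j] ** 2) * x + lam * w[j]
            v[j] -= eta * gv
            w[j] -= eta * gw
after = penalized_error(data)
print(before > after)  # penalized error has decreased over training
```

Note how the penalty enters each update as an extra `lam * weight` term pulling the weights toward zero; this is what keeps the weight sequence bounded, the key ingredient of the convergence proof the abstract mentions.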