Modifying Training Algorithms for Improved Fault Tolerance
| Content Provider | Semantic Scholar |
|---|---|
| Author | Chiu, C. K.; Mehrotra, Kishan; Mohan, Chilukuri K.; Ranka, Sanjay |
| Copyright Year | 2007 |
| Abstract | This paper presents three approaches to improve the fault tolerance of neural networks. In two approaches, the traditional backpropagation training algorithm is itself modified so that the trained networks have improved fault tolerance; we achieve better results than others [1, 10] who had also explored this possibility. Our first method is to coerce weights to have low magnitudes during the training process, since fault tolerance is degraded by the use of high-magnitude weights; at the same time, additional hidden nodes are added dynamically to the network to ensure that the desired performance can be reached. Our second method is to add artificial faults to various components (nodes and links) of a network during training. This leads to the development of networks that perform well even when faults occur in the network. The third method repeatedly eliminates nodes of least sensitivity, then "splits" the most sensitive nodes and retrains the system. This generally results in the best performance, although it requires a small amount of additional retraining after a network is built. Experimental results have shown that these methods can obtain better robustness than backpropagation training, and compare favorably with other approaches [1, 10]. |
| File Format | PDF / HTML |
| Language | English |
| Access Restriction | Open |
| Subject Keyword | Algorithm Artificial neural network Backpropagation Fault tolerance Neural Network Simulation Weight |
| Content Type | Text |
| Resource Type | Article |
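The abstract's first two methods (coercing weights toward low magnitudes, and injecting artificial faults into nodes during training) can be illustrated with a short sketch. This is not the paper's implementation; it is a minimal NumPy example assuming a one-hidden-layer sigmoid network trained by backpropagation, where `weight_penalty` stands in for the magnitude-coercion term and `fault_rate` for the probability of a stuck-at-zero hidden-node fault per training step. All names and parameter values are illustrative.

```python
import numpy as np

def train_fault_tolerant(X, y, hidden=8, epochs=2000, lr=0.5,
                         weight_penalty=1e-3, fault_rate=0.2, seed=0):
    """Illustrative sketch (not the paper's code):
    - an L2-style penalty coerces weights toward low magnitudes;
    - random hidden-node faults (stuck-at-zero) are injected each step,
      so the network learns to perform well despite missing nodes."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.5, (X.shape[1], hidden))
    W2 = rng.normal(0.0, 0.5, (hidden, 1))
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        h = sig(X @ W1)
        # inject artificial faults: zero out random hidden nodes this step
        mask = (rng.random(hidden) >= fault_rate).astype(float)
        h_f = h * mask
        out = sig(h_f @ W2)
        # backpropagate squared error through the faulted forward pass
        delta_out = (out - y) * out * (1.0 - out)
        dW2 = h_f.T @ delta_out
        dh = (delta_out @ W2.T) * mask * h * (1.0 - h)
        dW1 = X.T @ dh
        # gradient step plus weight-magnitude penalty (weight decay)
        W1 -= lr * (dW1 + weight_penalty * W1)
        W2 -= lr * (dW2 + weight_penalty * W2)
    return W1, W2

# toy usage on an XOR-style problem
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
W1, W2 = train_fault_tolerant(X, y)
```

The paper's third method (pruning the least sensitive nodes, then splitting and retraining the most sensitive ones) would operate on a trained network like the one returned here, but is not sketched above.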