How Does Batch Normalization Help Binary Training?
| Content Provider | Semantic Scholar |
|---|---|
| Author | Sari, Eyyüb; Belbahri, Mouloud; Partovi Nia, Vahid |
| Copyright Year | 2019 |
| Abstract | Binary Neural Networks (BNNs) are difficult to train and suffer from a drop in accuracy. In practice, BNNs fail to train in the absence of a Batch Normalization (BatchNorm) layer. We find that the main role of BatchNorm is to avoid exploding gradients in the case of BNNs. This finding suggests that the common initialization methods developed for full-precision networks are irrelevant to BNNs. We build a theoretical study of the role of BatchNorm in binary training, backed up by numerical experiments. |
| File Format | PDF, HTM/HTML |
| Alternate Webpage(s) | http://export.arxiv.org/pdf/1909.09139 |
| Language | English |
| Access Restriction | Open |
| Content Type | Text |
| Resource Type | Article |
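
The abstract above describes BatchNorm's role in binary training in prose only. As an illustration, here is a minimal, hypothetical PyTorch sketch of a binarized linear layer followed by BatchNorm, using a straight-through estimator (a standard technique in binary training, not code taken from the paper). The names `BinarizeSTE` and `BinaryLinear`, the initialization, and all parameters are assumptions for demonstration, not the authors' implementation.

```python
import torch
import torch.nn as nn


class BinarizeSTE(torch.autograd.Function):
    """Sign binarization with a straight-through estimator (STE) backward pass."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        # torch.sign maps 0 to 0; acceptable for a sketch.
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Pass gradients through only where |x| <= 1, the usual STE clipping.
        return grad_output * (x.abs() <= 1).float()


class BinaryLinear(nn.Module):
    """Linear layer with binarized weights, optionally followed by BatchNorm.

    Per the abstract, removing BatchNorm from such layers is what makes
    binary training fail due to exploding gradients.
    """

    def __init__(self, in_features, out_features, use_batchnorm=True):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.bn = nn.BatchNorm1d(out_features) if use_batchnorm else nn.Identity()

    def forward(self, x):
        w_bin = BinarizeSTE.apply(self.weight)  # weights in {-1, 0, +1}
        return self.bn(x @ w_bin.t())


# Usage: compare weight-gradient magnitudes with and without BatchNorm.
torch.manual_seed(0)
x = torch.randn(32, 128)
for use_bn in (True, False):
    layer = BinaryLinear(128, 64, use_batchnorm=use_bn)
    layer(x).pow(2).mean().backward()
    print(f"BatchNorm={use_bn}: weight grad norm = {layer.weight.grad.norm():.4f}")
```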