DANTE: Deep alternations for training neural networks
V.B. Sinha, S. Kudugunta, A.R. Sankar, S.T. Chavali, et al.
Published in: Neural Networks (Elsevier Ltd)
Year: 2020
PMID: 32771843
Volume: 131
Pages: 127-143
Abstract
We present DANTE, a novel method for training neural networks using the alternating minimization principle. DANTE provides an alternative perspective to the gradient-based backpropagation techniques traditionally used to train deep networks. It uses an adaptation of quasi-convexity to cast training a neural network as a bi-quasi-convex optimization problem. We show that for neural network configurations with both differentiable (e.g. sigmoid) and non-differentiable (e.g. ReLU) activation functions, the alternations can be performed effectively in this formulation. DANTE also extends to networks with multiple hidden layers. In experiments on standard datasets, neural networks trained with the proposed method were promising and competitive with traditional backpropagation, both in solution quality and in training speed.
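To illustrate the alternation pattern the abstract describes, here is a minimal sketch in PyTorch of alternating minimization for a one-hidden-layer network: freeze one layer, optimize the other, and repeat. This is an assumed, generic illustration, not the paper's DANTE procedure (which relies on the bi-quasi-convex formulation and, per the paper, specialized alternation steps); all names, shapes, and hyperparameters here are hypothetical.

```python
# Generic alternating-minimization sketch (assumption: NOT the exact DANTE
# algorithm, only the freeze-one-layer / optimize-the-other pattern).
import torch

torch.manual_seed(0)

# Toy regression data, chosen purely for illustration.
X = torch.randn(256, 10)
y = torch.randn(256, 1)

W1 = torch.randn(10, 32, requires_grad=True)   # hidden-layer weights
W2 = torch.randn(32, 1, requires_grad=True)    # output-layer weights

def forward(X, W1, W2):
    # One hidden layer with a sigmoid activation, as in the paper's
    # differentiable-activation setting.
    return torch.sigmoid(X @ W1) @ W2

def inner_steps(params, steps=20, lr=1e-2):
    """Minimize the squared loss over `params`, other layer held fixed."""
    opt = torch.optim.SGD(params, lr=lr)
    loss = None
    for _ in range(steps):
        opt.zero_grad()
        loss = ((forward(X, W1, W2) - y) ** 2).mean()
        loss.backward()
        opt.step()
    return loss.item()

# Outer loop: alternate between the two blocks of weights.
for epoch in range(10):
    inner_steps([W2])                 # fix W1, fit the output layer
    loss = inner_steps([W1])          # fix W2, fit the hidden layer
    print(f"epoch {epoch}: loss after alternation = {loss:.4f}")
```

Each outer iteration solves two easier subproblems instead of one joint one; DANTE's contribution is showing that, under the quasi-convexity adaptation, each such subproblem can be handled effectively even for non-differentiable activations like ReLU.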
About the journal
Journal: Neural Networks
Publisher: Elsevier Ltd
ISSN: 0893-6080