(Submitted on 20 Feb 2019 (v1), last revised 13 Mar 2019 (this version, v2))
Optimizing deep neural networks is largely thought to be an empirical process, requiring manual tuning of several hyper-parameters, such as the learning rate, weight decay, and dropout rate. Arguably, the learning rate is the most important of these to tune, and it has received increasing attention in recent work. In this paper, we propose a novel method to compute the learning rate for training deep neural networks with stochastic gradient descent. We first derive a theoretical framework to compute learning rates dynamically based on the Lipschitz constant of the loss function. We then extend this framework to other commonly used optimization algorithms, such as gradient descent with momentum and Adam. We run an extensive set of experiments that demonstrate the efficacy of our approach on popular architectures and datasets, and show that commonly used learning rates are an order of magnitude smaller than the ideal value.
|Comments:||v2; added more experiments and adaptive versions of other optimization algorithms|
|Subjects:||Machine Learning (cs.LG); Machine Learning (stat.ML)|
|Cite as:||arXiv:1902.07399 [cs.LG]|
|(or arXiv:1902.07399v2 [cs.LG] for this version)|
Source Code [Keras]