On the line-search gradient methods for stochastic optimization

Darina Dvinskikh, Aleksandr Ogaltsov, Alexander Gasnikov, Pavel Dvurechensky, Vladimir Spokoiny

Research output: Contribution to journal › Conference article › peer-review

4 Citations (Scopus)


We consider several line-search-based gradient methods for stochastic optimization: gradient and accelerated gradient methods for convex optimization, and a gradient method for non-convex optimization. The methods simultaneously adapt to the unknown Lipschitz constant of the gradient and to the variance of the stochastic approximation of the gradient. The focus of this paper is a numerical comparison of such methods with state-of-the-art adaptive methods based on a different idea: using the norm of the stochastic gradient to define the stepsize, e.g., AdaGrad and Adam.
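The adaptation to an unknown Lipschitz constant described in the abstract can be illustrated with a backtracking line search on a local Lipschitz estimate L. The sketch below is a minimal deterministic version of that idea (the function names and the quadratic test problem are illustrative assumptions, not the paper's exact stochastic method):

```python
import numpy as np

def adaptive_gradient_descent(grad, f, x0, L0=1.0, n_iters=200):
    """Gradient descent with a backtracking line search on the local
    Lipschitz estimate L. A sketch of adapting to an unknown Lipschitz
    constant; the paper's methods additionally adapt to stochastic noise."""
    x, L = np.asarray(x0, dtype=float), L0
    for _ in range(n_iters):
        g = grad(x)
        L = max(L / 2.0, 1e-12)  # optimistically shrink L each iteration
        while True:
            x_new = x - g / L  # candidate step with stepsize 1/L
            # Accept once the descent-lemma inequality holds for this L
            if f(x_new) <= f(x) + g @ (x_new - x) + 0.5 * L * np.sum((x_new - x) ** 2):
                break
            L *= 2.0  # step was too long: double the Lipschitz estimate
        x = x_new
    return x

# Usage: minimize the convex quadratic f(x) = 0.5 * ||A x - b||^2
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
grad = lambda x: A.T @ (A @ x - b)
x_star = adaptive_gradient_descent(grad, f, np.zeros(2))
```

By contrast, the norm-based methods the paper compares against (AdaGrad, Adam) set the stepsize from accumulated stochastic gradient norms rather than from a line-search test like the inequality above.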

Original language: English
Pages (from-to): 1715-1720
Number of pages: 6
Publication status: Published - 2020
Externally published: Yes
Event: 21st IFAC World Congress 2020 - Berlin, Germany
Duration: 12 Jul 2020 - 17 Jul 2020


Keywords:
  • Adaptive method
  • Complexity bounds
  • Convex and non-convex optimization
  • First-order method
  • Gradient descent
  • Mini-batch
  • Stochastic optimization


