Finite-time convergent recurrent neural network with a hard-limiting activation function for constrained optimization with piecewise-linear objective functions

Qingshan Liu, Jun Wang

Research output: Contribution to journal › Article › peer-review

91 Citations (Scopus)

Abstract

This paper presents a one-layer recurrent neural network for solving a class of constrained nonsmooth optimization problems with piecewise-linear objective functions. The proposed neural network is guaranteed to be globally convergent in finite time to the optimal solutions under a mild condition on a derived lower bound of a single gain parameter in the model. The number of neurons in the neural network equals the number of decision variables of the optimization problem. Compared with existing neural networks for optimization, the proposed neural network has two salient features: finite-time convergence and low model complexity. Specific models for two important special cases, namely, linear programming and nonsmooth optimization, are also presented. In addition, applications to the shortest path problem and the constrained least absolute deviation problem are discussed with simulation results to demonstrate the effectiveness and characteristics of the proposed neural network.
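As an illustrative sketch of the idea the abstract describes, the snippet below simulates a sign-activation (hard-limiting) gradient flow for an unconstrained least absolute deviation problem, minimize ||Ax - b||_1, integrated by forward Euler. This is not the paper's exact one-layer model (which handles general constraints and derives a lower bound on a gain parameter); the function name `lad_flow` and all parameter values here are assumptions chosen only to show how a hard-limiting activation drives a piecewise-linear objective toward its minimizer.

```python
import numpy as np

def lad_flow(A, b, x0, gain=1.0, dt=1e-3, steps=20000):
    """Euler-discretized sign-activation flow for min ||Ax - b||_1.

    Illustrative only: sign(.) plays the role of the hard-limiting
    activation, and -A.T @ sign(Ax - b) is a subgradient descent
    direction for the piecewise-linear objective.
    """
    x = x0.astype(float).copy()
    for _ in range(steps):
        x -= dt * gain * (A.T @ np.sign(A @ x - b))
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((30, 2))
    x_true = np.array([1.5, -0.5])
    b = A @ x_true  # noiseless data, so the LAD minimizer is x_true
    x = lad_flow(A, b, x0=np.zeros(2))
    print(x)
```

With a fixed step size the trajectory settles into a small neighborhood of the minimizer rather than converging exactly; the finite-time exact convergence claimed in the paper relies on its continuous-time dynamics and gain condition, not on this discretization.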

Original language: English
Article number: 5728927
Pages (from-to): 601-613
Number of pages: 13
Journal: IEEE Transactions on Neural Networks
Volume: 22
Issue number: 4
DOIs
Publication status: Published - Apr 2011
Externally published: Yes

Keywords

  • Constrained optimization
  • convergence in finite time
  • global Lyapunov method
  • recurrent neural networks
