A one-layer recurrent neural network for constrained nonconvex optimization

Guocheng Li, Zheng Yan, Jun Wang

Research output: Contribution to journal › Article › peer-review

61 Citations (Scopus)

Abstract

In this paper, a one-layer recurrent neural network is proposed for solving nonconvex optimization problems subject to general inequality constraints, designed based on an exact penalty function method. It is proved herein that any state of the proposed neural network reaches the feasible region in finite time and stays there thereafter, provided that the penalty parameter is sufficiently large. Lower bounds on the penalty parameter and the convergence time are also estimated. In addition, any state of the proposed neural network converges to its equilibrium point set, which satisfies the Karush-Kuhn-Tucker conditions of the optimization problem. Moreover, the equilibrium point set coincides with the set of optimal solutions to the nonconvex optimization problem if the objective function and constraints satisfy given conditions. Four numerical examples are provided to illustrate the performance of the proposed neural network.
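The abstract does not give the network's exact dynamics, so the following is only a minimal sketch of the general penalty-based neurodynamic idea it describes: simulate the flow dx/dt = -(∇f(x) + σ Σᵢ sᵢ ∇gᵢ(x)), where sᵢ is a subgradient of max(0, gᵢ(x)) and σ is the penalty parameter, with forward Euler. The example problem, penalty value sigma, step size dt, and horizon steps below are all assumptions for illustration, not the paper's settings.

```python
import numpy as np

def f(x):
    # Assumed nonconvex objective for the demo
    return x[0]**4 - 3*x[0]**2 + x[1]**2

def grad_f(x):
    return np.array([4*x[0]**3 - 6*x[0], 2*x[1]])

def g(x):
    # Assumed inequality constraint g(x) <= 0
    return np.array([x[0] + x[1] - 1.0])

def grad_g(x):
    return np.array([[1.0, 1.0]])

def simulate(x0, sigma=10.0, dt=1e-3, steps=20000):
    """Forward-Euler simulation of the penalty-based neurodynamics."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        viol = g(x)
        # Subgradient of max(0, g_i): 1 where violated, 0 where strictly feasible
        s = (viol > 0).astype(float)
        dx = -(grad_f(x) + sigma * (s @ grad_g(x)))
        x = x + dt * dx
    return x

x_star = simulate([2.0, 2.0])
print(x_star, g(x_star))  # the state should end up feasible (g(x) <= 0)
```

With a sufficiently large sigma, the trajectory first enters the feasible region and then settles near a KKT point of the constrained problem, mirroring the two-stage behavior the abstract states.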

Original language: English
Pages (from-to): 10-21
Number of pages: 12
Journal: Neural Networks
Volume: 61
DOIs
Publication status: Published - 1 Jan 2015
Externally published: Yes

Keywords

  • Exact penalty function
  • Finite time convergence
  • Nonconvex optimization
  • Recurrent neural network

