A one-layer recurrent neural network for constrained nonsmooth invex optimization

Guocheng Li, Zheng Yan, Jun Wang

Research output: Contribution to journal › Article › peer-review

48 Citations (Scopus)

Abstract

Invexity is an important notion in nonconvex optimization. In this paper, a one-layer recurrent neural network is proposed for solving constrained nonsmooth invex optimization problems, designed on the basis of an exact penalty function method. It is proved herein that any state of the proposed neural network is globally convergent to the optimal solution set of the constrained invex optimization problem, provided that the penalty parameter is sufficiently large. In addition, any state is globally convergent to the unique optimal solution, provided that the objective function and constraint functions are pseudoconvex. Moreover, any state reaches the feasible region in finite time and stays there thereafter. Lower bounds on the penalty parameter and the convergence time are also estimated. Two numerical examples are provided to illustrate the performance of the proposed neural network.
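The abstract does not reproduce the network model itself. As a rough illustration only, the sketch below simulates an exact-penalty subgradient flow of the kind the abstract describes, discretized with forward Euler steps on a small hypothetical convex (hence invex) problem. The objective, constraint, penalty parameter, step size, and all names in the code are assumptions made for illustration, not the authors' formulation.

```python
import numpy as np

# Minimal sketch (not the authors' exact model): a discretized exact-penalty
# subgradient flow  dx/dt in -d[ f(x) + sigma * max(0, g(x)) ],
# simulated with forward Euler steps on a small hypothetical problem:
#   minimize   f(x) = |x1 - 2| + (x2 - 1)^2
#   subject to g(x) = x1 + x2 - 2 <= 0
# The penalty parameter sigma is chosen above the Lagrange multiplier of the
# constraint, in line with the "sufficiently large penalty parameter"
# condition mentioned in the abstract.

def f_subgrad(x):
    """A subgradient of the nonsmooth objective f at x."""
    g1 = np.sign(x[0] - 2.0)       # subgradient of |x1 - 2| (0 is valid at the kink)
    g2 = 2.0 * (x[1] - 1.0)        # gradient of (x2 - 1)^2
    return np.array([g1, g2])

def penalty_subgrad(x):
    """A subgradient of the exact penalty term max(0, x1 + x2 - 2) at x."""
    if x[0] + x[1] - 2.0 > 0.0:
        return np.array([1.0, 1.0])
    return np.zeros(2)             # 0 is a valid subgradient inside the feasible set

def simulate(x0, sigma=10.0, step=1e-3, n_steps=20000):
    """Forward-Euler simulation of the penalty-based subgradient flow."""
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        x -= step * (f_subgrad(x) + sigma * penalty_subgrad(x))
    return x

if __name__ == "__main__":
    x_final = simulate([5.0, 5.0])
    # The constrained minimizer of this toy problem is (1.5, 0.5); the
    # discretized trajectory should end up close to it.
    print("approximate solution:", x_final)
```

In continuous time the flow is a differential inclusion because the objective and penalty are nonsmooth; the fixed-step Euler discretization above only approximates it and chatters slightly around the constraint boundary, which is acceptable for this illustrative purpose.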

Original language: English
Pages (from-to): 79-89
Number of pages: 11
Journal: Neural Networks
Volume: 50
DOIs
Publication status: Published - Feb 2014
Externally published: Yes

Keywords

  • Exact penalty function
  • Finite time convergence
  • Invex optimization
  • Recurrent neural network

