A one-layer recurrent neural network with a discontinuous hard-limiting activation function for quadratic programming

Qingshan Liu, Jun Wang

Research output: Contribution to journal › Article › peer-review

205 Citations (Scopus)

Abstract

In this paper, a one-layer recurrent neural network with a discontinuous hard-limiting activation function is proposed for quadratic programming. The neural network is capable of solving a large class of quadratic programming problems. Its state variables are proven to be globally stable, and its output variables are proven to converge to optimal solutions, provided the objective function is strictly convex on the set defined by the equality constraints. In addition, a sequential quadratic programming approach based on the proposed recurrent neural network is developed for general nonlinear programming. Simulation results on numerical examples and support vector machine (SVM) learning demonstrate the effectiveness and performance of the neural network.
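The abstract does not reproduce the network's state equation, so the following is only a minimal sketch of the general idea: a dynamical system with a discontinuous hard-limiting (signum) term whose equilibrium solves an equality-constrained quadratic program, integrated here by forward Euler. The problem data (Q, c, A, b), the penalty gain sigma, and the step size are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Minimal sketch (NOT the exact model from the paper): a discontinuous
# gradient flow with a hard-limiting (signum) penalty term for the
# equality-constrained QP
#     minimize (1/2) x'Qx + c'x   subject to   Ax = b,
# integrated by forward Euler. All data below are illustrative.
Q = np.array([[4.0, 1.0],
              [1.0, 3.0]])      # positive definite, so the objective is strictly convex
c = np.array([-1.0, -2.0])
A = np.array([[1.0, 1.0]])      # single equality constraint: x1 + x2 = 1
b = np.array([1.0])

sigma = 10.0                    # penalty gain; assumed to exceed the optimal
                                # multiplier's infinity norm (exact penalty)
dt = 1e-3                       # Euler step size
x = np.zeros(2)                 # arbitrary initial state

for _ in range(20000):
    g = np.sign(A @ x - b)      # hard-limiting activation of the residual
    # discontinuous dynamics: dx/dt = -(Qx + c) - sigma * A' sgn(Ax - b)
    x = x + dt * (-(Q @ x + c) - sigma * A.T @ g)

print("approximate solution:", x)           # ≈ [0.2, 0.8] for this data
print("constraint residual :", A @ x - b)   # ≈ 0, up to O(sigma*dt) chattering
```

Because the signum term makes the right-hand side discontinuous, the trajectory chatters around the feasible set rather than settling exactly; this is the setting in which differential-inclusion and Lyapunov arguments (cf. the keywords below) are the natural analysis tools.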

Original language: English
Pages (from-to): 558-570
Number of pages: 13
Journal: IEEE Transactions on Neural Networks
Volume: 19
Issue number: 4
DOIs
Publication status: Published - Apr 2008
Externally published: Yes

Keywords

  • Differential inclusion
  • Global convergence
  • Hard-limiting activation function
  • Lyapunov stability
  • Nonlinear programming
  • Quadratic programming
  • Recurrent neural network
