Convergence of a recurrent neural network for nonconvex optimization based on an augmented Lagrangian function

Xiaolin Hu, Jun Wang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

8 Citations (Scopus)

Abstract

In this paper, a recurrent neural network based on an augmented Lagrangian function is proposed for seeking local minima of nonconvex optimization problems with inequality constraints. First, each equilibrium point of the neural network corresponds to a Karush-Kuhn-Tucker (KKT) point of the problem. Second, by appropriately choosing a control parameter, the neural network is asymptotically stable at those local minima that satisfy some mild conditions. The latter property is ensured by the convexification capability of the augmented Lagrangian function. The proposed scheme is inspired by many existing neural networks in the literature and can be regarded as an extension or improvement of them. A simulation example is discussed to illustrate the results.
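The following is a minimal Python sketch of one standard construction of this kind, not the exact network analyzed in the paper: primal-dual gradient flow on an augmented Lagrangian for min f(x) subject to g(x) <= 0, integrated by forward Euler. The objective, constraint, penalty parameter c, step size, and horizon below are all illustrative assumptions.

import numpy as np

# Illustrative nonconvex problem (an assumption, not the paper's example):
#   minimize f(x) = x1^4 - 2*x1^2 + x2^2  subject to  g(x) = x1 + x2 - 1 <= 0
def f(x):
    return x[0]**4 - 2.0*x[0]**2 + x[1]**2

def grad_f(x):
    return np.array([4.0*x[0]**3 - 4.0*x[0], 2.0*x[1]])

def g(x):
    return np.array([x[0] + x[1] - 1.0])

def jac_g(x):
    return np.array([[1.0, 1.0]])   # Jacobian of g, shape (1, 2)

c = 10.0                     # control (penalty) parameter of the augmented Lagrangian
dt = 1e-3                    # Euler step used to integrate the network dynamics
x = np.array([2.0, 2.0])     # primal state
lam = np.zeros(1)            # dual state (multiplier estimate)

for _ in range(100000):
    mult = np.maximum(0.0, lam + c * g(x))   # shifted multiplier max(0, lam + c*g(x))
    dx = -(grad_f(x) + jac_g(x).T @ mult)    # dx/dt   = -grad_x L_c(x, lam)
    dlam = (mult - lam) / c                  # dlam/dt = +grad_lam L_c(x, lam)
    x = x + dt * dx
    lam = lam + dt * dlam

print(x, lam)   # an equilibrium of these dynamics satisfies the KKT conditions

Here the equilibria of the coupled (x, lam) dynamics are exactly the KKT points of the constrained problem, mirroring the paper's first claim; larger values of c correspond to the convexification role of the control parameter in the second claim.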

Original language: English
Title of host publication: Advances in Neural Networks - ISNN 2007 - 4th International Symposium on Neural Networks, ISNN 2007, Proceedings
Publisher: Springer Verlag
Pages: 194-203
Number of pages: 10
Edition: PART 3
ISBN (Print): 9783540723943
DOIs
Publication status: Published - 2007
Externally published: Yes
Event: 4th International Symposium on Neural Networks, ISNN 2007 - Nanjing, China
Duration: 3 Jun 2007 - 7 Jun 2007

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Number: PART 3
Volume: 4493 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 4th International Symposium on Neural Networks, ISNN 2007
Country/Territory: China
City: Nanjing
Period: 3/06/07 - 7/06/07
