A one-layer recurrent neural network for convex programming

Qingshan Liu, Jun Wang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution (peer-reviewed)

7 Citations (Scopus)

Abstract

This paper presents a one-layer recurrent neural network for solving convex programming problems subject to linear equality and nonnegativity constraints. The number of neurons in the neural network equals the number of decision variables in the optimization problem. Compared with existing neural networks for optimization, the proposed neural network has lower model complexity. Moreover, the proposed neural network is proven to be globally convergent to the optimal solution(s) under mild conditions. Simulation results demonstrate the effectiveness and performance of the proposed neural network.
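
To give a concrete feel for this class of methods, the sketch below numerically integrates a classical primal-dual projection network for a small convex quadratic program with one linear equality constraint and nonnegativity constraints. This is a generic illustration under assumed problem data (Q, c, A, b) and an assumed Euler step size; it is not the authors' model. In particular, it keeps n primal plus m dual state variables, whereas the paper's one-layer network needs only as many neurons as decision variables.

```python
import numpy as np

# Illustrative primal-dual projection network (not the paper's one-layer model):
#   minimize  0.5 * x'Qx + c'x
#   subject to Ax = b, x >= 0
# All problem data below are hypothetical example values.

Q = np.array([[4.0, 1.0],
              [1.0, 3.0]])          # positive definite -> strictly convex objective
c = np.array([-1.0, -2.0])
A = np.array([[1.0, 1.0]])          # single equality constraint: x1 + x2 = 1
b = np.array([1.0])

def grad_f(x):
    return Q @ x + c

x = np.zeros(2)                     # primal state (decision variables)
y = np.zeros(1)                     # dual state for the equality constraint
dt = 0.01                           # Euler step for integrating the ODE

for _ in range(20000):
    # Projection onto the nonnegative orthant enforces x >= 0.
    u = np.maximum(x - (grad_f(x) + A.T @ y), 0.0)
    dx = u - x                      # primal dynamics (projected gradient direction)
    dy = A @ x - b                  # dual dynamics (drives Ax toward b)
    x += dt * dx
    y += dt * dy

print("approximate optimum:", x)    # converges to about [0.2, 0.8] for this data
```

For this example the equilibrium of the dynamics coincides with the KKT point of the quadratic program, so the integrated trajectory settles near the optimal solution while respecting the nonnegativity constraint throughout.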

Original language: English
Title of host publication: 2008 International Joint Conference on Neural Networks, IJCNN 2008
Pages: 83-90
Number of pages: 8
DOIs
Publication status: Published - 2008
Externally published: Yes
Event: 2008 International Joint Conference on Neural Networks, IJCNN 2008 - Hong Kong, China
Duration: 1 Jun 2008 - 8 Jun 2008

Publication series

Name: Proceedings of the International Joint Conference on Neural Networks

Conference

Conference: 2008 International Joint Conference on Neural Networks, IJCNN 2008
Country/Territory: China
City: Hong Kong
Period: 1/06/08 - 8/06/08
