A one-layer recurrent neural network with a discontinuous activation function for linear programming

Qingshan Liu, Jun Wang

Research output: Contribution to journal › Article › peer-review

79 Citations (Scopus)

Abstract

A one-layer recurrent neural network with a discontinuous activation function is proposed for linear programming. The number of neurons in the neural network is equal to that of decision variables in the linear programming problem. It is proven that the neural network with a sufficiently high gain is globally convergent to the optimal solution. Its application to linear assignment is discussed to demonstrate the utility of the neural network. Several simulation examples are given to show the effectiveness and characteristics of the neural network.
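The paper's exact network dynamics are not reproduced in the abstract. As a rough illustration of the general idea only, the sketch below simulates a generic one-layer neurodynamic LP solver: an exact-penalty subgradient flow with a discontinuous sign activation, integrated by forward Euler. It is not the authors' model; the function name simulate_lp_network, the specific penalty form, and all parameter values are illustrative assumptions. As in the abstract, the state vector has one component (neuron) per decision variable, and the gain sigma must be sufficiently large for the penalty to be exact.

```python
import numpy as np

def simulate_lp_network(c, A, b, sigma=10.0, dt=1e-4, steps=200_000, x0=None):
    """Forward-Euler simulation of a one-layer neurodynamic LP solver
    (generic exact-penalty subgradient flow, NOT the paper's exact model).

    Approximately solves  min c^T x  s.t.  A x = b,  x >= 0  by integrating
        dx/dt = -c - sigma * A^T sign(A x - b) + sigma * [x < 0],
    where sign(.) is the discontinuous activation and sigma is the gain;
    for sufficiently large sigma (under standard constraint qualifications)
    the equilibria of such flows coincide with optimal solutions.
    """
    m, n = A.shape
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float).copy()
    for _ in range(steps):
        residual = A @ x - b                        # equality-constraint violation
        dx = -c - sigma * (A.T @ np.sign(residual)) # drive A x toward b
        dx += sigma * np.where(x < 0, 1.0, 0.0)     # push negative components back to x >= 0
        x += dt * dx
    return x

if __name__ == "__main__":
    # Tiny example: min -x1 - 2*x2  s.t.  x1 + x2 = 1,  x >= 0  (optimum near x = (0, 1))
    c = np.array([-1.0, -2.0])
    A = np.array([[1.0, 1.0]])
    b = np.array([1.0])
    x = simulate_lp_network(c, A, b)
    print("approximate solution:", np.round(x, 3))
```

Because the activation is discontinuous, the discretized trajectory chatters around the constraint surface with amplitude on the order of sigma * dt, so the step size must be small relative to the gain; the continuous-time network described in the paper does not have this discretization artifact.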

Original language: English
Pages (from-to): 1366-1383
Number of pages: 18
Journal: Neural Computation
Volume: 20
Issue number: 5
DOIs
Publication status: Published - May 2008
Externally published: Yes
