Finite-time convergent recurrent neural network with a hard-limiting activation function for constrained optimization with piecewise-linear objective functions

Qingshan Liu, Jun Wang

Research output: Contribution to journal › Article › peer-review

91 Citations (Scopus)

Abstract

This paper presents a one-layer recurrent neural network for solving a class of constrained nonsmooth optimization problems with piecewise-linear objective functions. The proposed neural network is guaranteed to be globally convergent in finite time to the optimal solutions under a mild condition on a derived lower bound of a single gain parameter in the model. The number of neurons in the neural network is the same as the number of decision variables of the optimization problem. Compared with existing neural networks for optimization, the proposed neural network has a couple of salient features such as finite-time convergence and a low model complexity. Specific models for two important special cases, namely, linear programming and nonsmooth optimization, are also presented. In addition, applications to the shortest path problem and constrained least absolute deviation problem are discussed with simulation results to demonstrate the effectiveness and characteristics of the proposed neural network.
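The exact neural network model is not reproduced in this record. Purely as an illustrative sketch of the kind of dynamics the abstract describes, the following discretizes a subgradient flow with a hard-limiting (sign) activation for an unconstrained least absolute deviation problem, min_x ||Ax - b||_1. The function name, gain `sigma`, step size, and iteration count are all hypothetical choices for this sketch, not parameters taken from the paper.

```python
import numpy as np

# Illustrative only (not the paper's model): Euler discretization of the
# subgradient flow  dx/dt = -sigma * A^T sign(A x - b), which uses a
# hard-limiting (sign) activation, for  min_x ||A x - b||_1.
def lad_flow(A, b, sigma=1.0, step=1e-4, iters=50_000):
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        # hard-limiting activation applied to the residual
        x -= step * sigma * A.T @ np.sign(A @ x - b)
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true          # noiseless data, so the LAD minimizer is x_true
x_hat = lad_flow(A, b)  # converges to a small neighborhood of x_true
```

With a constant step size this discretization only settles into a small neighborhood of the minimizer (the iterate chatters around it); the finite-time convergence result in the paper concerns the continuous-time model under its gain condition, not this crude discretization.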

Original language: English
Article number: 5728927
Pages (from-to): 601-613
Number of pages: 13
Journal: IEEE Transactions on Neural Networks
Volume: 22
Issue number: 4
DOI
State: Published - Apr 2011
Externally published: Yes
