A general methodology for designing globally convergent optimization neural networks

Youshen Xia, Jun Wang

Research output: Contribution to journal › Article › peer-review

215 Citations (Scopus)

Abstract

In this paper, we present a general methodology for designing optimization neural networks. We prove that the neural networks constructed by the proposed method are guaranteed to be globally convergent to solutions of problems with bounded or unbounded solution sets, in contrast with gradient methods, whose convergence is not guaranteed. We show that the proposed method contains both the gradient methods and the nongradient methods employed in existing optimization neural networks as special cases. Based on the theoretical results of the proposed method, we study the convergence and stability of general gradient models in the case of unisolated solutions. Using the proposed method, we derive new neural network models for a very large class of optimization problems, in which the equilibrium points correspond to exact solutions and no variable parameter is required. Finally, numerical examples demonstrate the effectiveness of the method.
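The paper's general model is not reproduced on this page, but the key property described in the abstract, that equilibrium points correspond to exact solutions without a penalty or other variable parameter, is characteristic of projection-type network dynamics of the form dx/dt = P(x − ∇f(x)) − x, whose equilibria satisfy the optimality conditions exactly. Below is a minimal sketch in that spirit for a small box-constrained quadratic program; the problem data (Q, c, the box bounds), step size, and iteration count are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical example problem: minimize f(x) = 0.5 x^T Q x + c^T x
# over the box [0, 1]^2. Q, c, and the bounds are illustrative choices.
Q = np.array([[4.0, 1.0], [1.0, 3.0]])
c = np.array([-1.0, -2.0])
lo, hi = np.zeros(2), np.ones(2)

def grad_f(x):
    return Q @ x + c

def project(x):
    # Projection onto the feasible box (closed-form for box constraints).
    return np.clip(x, lo, hi)

# Projection-type dynamics: dx/dt = P(x - grad f(x)) - x.
# An equilibrium satisfies x = P(x - grad f(x)), which is equivalent to the
# optimality conditions, so equilibria are exact solutions and no penalty
# parameter appears in the model. Integrated here with a simple Euler scheme.
x = np.array([0.9, 0.1])  # arbitrary initial state
dt = 0.01
for _ in range(5000):
    x = x + dt * (project(x - grad_f(x)) - x)

print("approximate solution:", x)  # ~ [1/11, 7/11] for this data
```

For this particular Q and c the unconstrained minimizer already lies inside the box, so the trajectory settles at x* = (1/11, 7/11); with tighter bounds the same dynamics converge to the constrained solution instead.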

Original language: English
Pages (from-to): 1331-1343
Number of pages: 13
Journal: IEEE Transactions on Neural Networks
Volume: 9
Issue number: 6
DOI
State: Published - 1998
Externally published: Yes
