Characterization of training errors in supervised learning using gradient-based rules

Jun Wang, B. Malakooti

Research output: Contribution to journal › Article › peer-review

8 Citations (Scopus)

Abstract

In most existing supervised learning paradigms, a neural network is trained by minimizing an error function with a learning rule. The commonly used learning rules are gradient-based, such as the popular backpropagation algorithm. This paper addresses an important issue in error minimization for supervised learning of neural networks using gradient-based learning rules. It characterizes the asymptotic properties of training errors for various forms of neural networks in supervised learning and discusses their practical implications for designing neural networks through remarks and examples. The analytical results presented in this paper reveal the dependence of the quality of supervised learning on the rank of the training samples and the associated steady activation states. They also reveal the complexity of achieving a zero training error.
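
The abstract's central point, that whether gradient descent can drive the training error to zero depends on the rank of the training patterns, can be illustrated with a toy linear model. The sketch below is not the paper's formulation or analysis; it is a minimal illustration, and all dimensions, names, and hyperparameters in it are assumptions chosen for the example.

```python
# Illustrative sketch only (not the paper's method): gradient descent on a
# sum-of-squares error for a single-layer linear network, plus a rank check
# on the pattern matrix. All sizes and the learning rate are arbitrary.
import numpy as np

rng = np.random.default_rng(0)

# Training set: n patterns, d inputs, m outputs (values are illustrative).
n, d, m = 8, 5, 2
X = rng.standard_normal((n, d))   # input patterns, one per row
T = rng.standard_normal((n, m))   # target outputs

W = np.zeros((d, m))              # weights to be learned
eta = 0.01                        # learning rate (small enough to converge)

def error(W):
    """Training error E(W) = (1/2) * ||X W - T||^2."""
    return 0.5 * np.sum((X @ W - T) ** 2)

for step in range(5000):
    grad = X.T @ (X @ W - T)      # gradient of E with respect to W
    W -= eta * grad               # gradient-descent update

# Zero training error requires every target row to be reachable, which for
# this linear model holds only when the pattern matrix has rank n.
print(f"rank(X) = {np.linalg.matrix_rank(X)}  (n = {n} patterns)")
print(f"final training error = {error(W):.6f}")
```

With n = 8 patterns but only d = 5 inputs, rank(X) < n, so the residual error stays strictly positive no matter how long gradient descent runs; choosing n ≤ d with full-rank patterns would let the error reach zero. This linear analogue loosely mirrors the rank dependence the abstract describes for general gradient-based supervised learning.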

Original language: English
Pages (from-to): 1073-1087
Number of pages: 15
Journal: Neural Networks
Volume: 6
Issue number: 8
DOIs
Publication status: Published - 1993
Externally published: Yes

Keywords

  • Gradient-based learning rule
  • Supervised learning
  • Training errors
