Error bounds for approximations with deep ReLU networks

    Research output: Contribution to journal › Article › peer-review

    312 Citations (Scopus)

    Abstract

    We study the expressive power of shallow and deep neural networks with piecewise linear activation functions. We establish new rigorous upper and lower bounds on network complexity in the setting of approximation in Sobolev spaces. In particular, we prove that deep ReLU networks approximate smooth functions more efficiently than shallow networks. For approximation of 1D Lipschitz functions, we describe adaptive depth-6 network architectures that are more efficient than the standard shallow architecture.
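
    A minimal numerical sketch of the depth claim (an illustration in the spirit of the result, not code taken from the paper): it approximates f(x) = x^2 on [0, 1] by repeatedly composing a ReLU "hat" function, so each additional composition (one more layer of fixed width) roughly quarters the uniform error. The function names and the error rate quoted in the comments are assumptions of this sketch.

    import numpy as np

    def relu(t):
        return np.maximum(t, 0.0)

    def hat(x):
        # Tent function on [0, 1], expressible with three ReLU units:
        # hat(x) = 2*relu(x) - 4*relu(x - 1/2) + 2*relu(x - 1)
        return 2 * relu(x) - 4 * relu(x - 0.5) + 2 * relu(x - 1.0)

    def approx_square(x, m):
        # Piecewise-linear approximation of x^2 on [0, 1] built from m
        # compositions of the hat function; the uniform error of this
        # interpolant decays like 4^(-m) as m (a proxy for depth) grows.
        g = x
        approx = x.copy()
        for k in range(1, m + 1):
            g = hat(g)                       # k-fold composition of hat
            approx = approx - g / 4.0 ** k   # refine the interpolant
        return approx

    x = np.linspace(0.0, 1.0, 10001)
    for m in (1, 3, 6):
        err = np.max(np.abs(approx_square(x, m) - x ** 2))
        print(f"m = {m} compositions: max error {err:.2e}")

    Under these assumptions, adding layers of fixed width drives the uniform error down exponentially in depth, whereas a single hidden layer must grow in width to reach a comparable accuracy.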

    Original language: English
    Pages (from-to): 103-114
    Number of pages: 12
    Journal: Neural Networks
    Volume: 94
    Publication status: Published - Oct 2017

    Keywords

    • Approximation complexity
    • Deep ReLU networks
