Universal Approximations of Invariant Maps by Neural Networks

Research output: Contribution to journal › Article › peer-review

6 Citations (Scopus)


We describe generalizations of the universal approximation theorem for neural networks to maps invariant or equivariant with respect to linear representations of groups. Our goal is to establish network-like computational models that are both invariant/equivariant and provably complete in the sense of their ability to approximate any continuous invariant/equivariant map. Our contribution is three-fold. First, in the general case of compact groups we propose a construction of a complete invariant/equivariant network using an intermediate polynomial layer. We invoke classical theorems of Hilbert and Weyl to justify and simplify this construction; in particular, we describe an explicit complete ansatz for approximation of permutation-invariant maps. Second, we consider groups of translations and prove several versions of the universal approximation theorem for convolutional networks in the limit of continuous signals on Euclidean spaces. Finally, we consider 2D signal transformations equivariant with respect to the group SE(2) of rigid Euclidean motions. In this case we introduce the "charge-conserving convnet", a convnet-like computational model based on the decomposition of the feature space into isotypic representations of SO(2). We prove this model to be a universal approximator for continuous SE(2)-equivariant signal transformations.
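The abstract's first contribution concerns complete ansätze for permutation-invariant maps. The following is a minimal sketch, not the paper's exact polynomial-layer construction: it illustrates the standard sum-pooling form ρ(Σᵢ φ(xᵢ)), one common complete ansatz for approximating continuous permutation-invariant functions. All layer sizes and function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_params(sizes):
    """Random weights and biases for a small fully connected network."""
    return [(rng.standard_normal((m, n)) / np.sqrt(m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp_apply(params, x):
    """Apply the MLP with ReLU hidden activations and a linear output layer."""
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.maximum(x, 0.0)
    return x

# phi embeds each set element; rho maps the pooled embedding to the output.
phi = mlp_params([1, 32, 32])   # per-element feature map
rho = mlp_params([32, 32, 1])   # readout applied after sum pooling

def invariant_net(xs):
    """xs: array of shape (n, 1), viewed as a set of n scalar elements."""
    pooled = mlp_apply(phi, xs).sum(axis=0)   # sum pooling => permutation invariance
    return mlp_apply(rho, pooled[None, :])[0, 0]

xs = rng.standard_normal((5, 1))
perm = rng.permutation(5)
# The output is unchanged under any permutation of the input set.
print(np.isclose(invariant_net(xs), invariant_net(xs[perm])))
```

Because the only interaction between set elements is the order-independent sum, the network is exactly permutation invariant by construction; universality then rests on the expressive power of the two MLPs.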

Original language: English
Journal: Constructive Approximation
Publication status: Published - 2021


  • Approximation
  • Convnet
  • Equivariance
  • Invariance
  • Linear representation
  • Neural network
  • Polarization
  • Polynomial


