A Complete Beginner's Guide to
G-Invariant Neural Networks

S. Chandra Mouli and Bruno Ribeiro

Purdue University


Video Tutorial

Preface

  • This is the first tutorial in a series that aims to explain the relationship between causality and G-invariances.
  • In this tutorial we consider the design of G-invariant neural networks: neural networks whose outputs are unchanged under the transformations (actions) of a transformation group G (a minimal code sketch follows the list below).

  • Target audience:

    • Neural network enthusiasts (familiar with neurons and know how to implement a feedforward network)
    • Linear algebra enthusiasts (familiar with eigenvectors, sum, multiplication, and vectorization of matrices)
  • Method and further readings
    • Our exposition follows Mouli & Ribeiro (ICLR 2021) for a few reasons:
      • Mouli & Ribeiro (ICLR 2021) uses fundamental but easy-to-understand concepts:
        • Group theory: the Reynolds operator (averaging over all group actions).
        • Linear algebra: invariant subspaces, eigenvectors.
      • We also recommend reading Yarotsky (2018) to better understand how these linear-algebraic invariances lead to maximally expressive (universal) invariant neural networks.
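
To make the notions of G-invariance and the Reynolds operator concrete, here is a minimal sketch in PyTorch. It is illustrative only (the class name ReynoldsInvariantNet, the choice of G = S_3, and the tiny feedforward base network are our assumptions, not the construction of Mouli & Ribeiro): averaging the outputs of an ordinary network over all actions of a finite group yields a function that is exactly G-invariant.

```python
# Minimal sketch (illustrative, not the construction from Mouli & Ribeiro):
# enforce G-invariance by averaging a base network's outputs over the group
# (the Reynolds operator), here with G = S_3 acting by permuting coordinates.
import itertools
import torch
import torch.nn as nn

class ReynoldsInvariantNet(nn.Module):
    """Returns (1/|G|) * sum_{g in G} f(g . x) for a base network f."""
    def __init__(self, base_net, group_perms):
        super().__init__()
        self.base_net = base_net
        self.group_perms = group_perms  # each g acts by permuting input coordinates

    def forward(self, x):
        outs = [self.base_net(x[:, perm]) for perm in self.group_perms]
        return torch.stack(outs, dim=0).mean(dim=0)

d = 3
base = nn.Sequential(nn.Linear(d, 16), nn.ReLU(), nn.Linear(16, 1))
perms = [list(p) for p in itertools.permutations(range(d))]  # all of S_3
model = ReynoldsInvariantNet(base, perms)

x = torch.randn(5, d)
g_x = x[:, [2, 0, 1]]  # act on x with one group element
print(torch.allclose(model(x), model(g_x), atol=1e-6))  # True: output is G-invariant
```

Because the group is closed under composition, permuting the input merely reorders the terms of the average, so the output is unchanged; this group-averaging map is the Reynolds operator applied to the base network.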

Note:

  • Symmetries go beyond geometry: