G-Invariant Neural Networks

- This is the first tutorial of a series that aims to explain the relationship between causality and G-invariances.

In this tutorial we consider the design of **G-invariant neural networks**: neural networks that are invariant to the transformations (actions) of a transformation group.

Target audience:

- Neural network enthusiasts (familiar with neurons and know how to implement a feedforward network)
- Linear algebra enthusiasts (familiar with eigenvectors and with sums, multiplication, and vectorization of matrices)
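To make the defining property concrete, here is a minimal sketch (not from Mouli & Ribeiro) of a G-invariant network: a tiny feedforward network made invariant to the permutation group by sum-pooling its input, so that f(g·x) = f(x) for every permutation g. The weights and sizes are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(1, 8))   # hypothetical hidden-layer weights
W2 = rng.normal(size=(8, 1))   # hypothetical output weights

def f(x):
    # Sum-pooling discards the ordering of the coordinates of x,
    # so the output satisfies f(g·x) = f(x) for every permutation g.
    pooled = np.array([x.sum()])
    h = np.tanh(pooled @ W1)
    return (h @ W2).item()

x = rng.normal(size=5)
g_x = rng.permutation(x)           # a group action: permute coordinates
assert np.isclose(f(x), f(g_x))    # invariance holds
```

Any permutation of the input yields exactly the same output, which is the invariance we will construct more systematically below.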

- Method and further reading:
- Our exposition follows Mouli & Ribeiro (ICLR 2021) for a few reasons: it uses fundamental but easy-to-understand concepts:
    - Group theory: the Reynolds operator
    - Linear algebra: invariant subspaces, eigenvectors

- We also recommend reading Yarotsky (2018) to better understand how (linear-algebraic) invariances translate into maximally expressive invariant neural networks.
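The two concepts above can be connected in a few lines of code. This is a hedged sketch (a toy instance, not the paper's construction): for the permutation group S_3 acting on R^3, the Reynolds operator is the average of the group's matrices, and it is a projector whose eigenvalue-1 eigenvectors span the G-invariant subspace.

```python
import itertools
import numpy as np

n = 3
group = []
for perm in itertools.permutations(range(n)):
    g = np.zeros((n, n))
    g[np.arange(n), perm] = 1.0   # permutation matrix for this element
    group.append(g)

# Reynolds operator: R = (1/|G|) * sum over all g in G
R = sum(group) / len(group)

# R projects onto the G-invariant subspace:
assert np.allclose(R @ R, R)           # idempotent (a projector)
for g in group:
    assert np.allclose(R @ g, R)       # absorbs every group element

# Its eigenvalue-1 eigenspace is the invariant subspace
# (here: the span of the all-ones vector).
eigvals, eigvecs = np.linalg.eigh(R)
assert np.isclose(np.sort(eigvals)[-1], 1.0)
```

Averaging over the group and reading off the eigenvalue-1 eigenspace is exactly the linear-algebra view of invariance that the rest of the tutorial builds on.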


**Note:**

- Our focus on invariances rather than equivariances is motivated by learning **invariances for causal extrapolation tasks**. We will cover how invariances relate to causality in future tutorials.
- Early work on averaging transformations to obtain invariances includes Reisert (2008), Skibbe (2013), Manay et al. (2006), and Kondor (2007).
- *Equivariances*: There is a lot of great work on equivariances, which are closely related and rely on the same basic principles we will cover:
    - Equivariance in 3D, on manifolds, and on point clouds: e.g., Cohen & Welling (ICLR 2017), Cohen et al. (NeurIPS 2019), Fuchs et al. (2020), Dym & Maron (2020), and other related work.
    - Equivariance for graph-type permutations: e.g., Maron et al. (ICLR 2019), Bloem-Reddy & Teh (JMLR 2020), and related work.
    - Equivariance through parameter sharing: Wood & Shawe-Taylor (DAM 1996) and Ravanbakhsh et al. (ICML 2017).
    - Equivariance and convolution in neural networks: Kondor & Trivedi (ICML 2018).

- Symmetries go beyond geometry: