
equivariant-architecture-designer

Use when you have validated symmetry groups and need to design neural network architecture that respects those symmetries. Invoke when user mentions equivariant layers, G-CNN, e3nn, steerable networks, building symmetry into model, or needs architecture recommendations for specific symmetry groups. Provides architecture patterns and implementation guidance.

$ Install

git clone https://github.com/lyndonkl/claude /tmp/claude && cp -r /tmp/claude/skills/equivariant-architecture-designer ~/.claude/skills/equivariant-architecture-designer

// tip: Run this command in your terminal to install the skill


---
name: equivariant-architecture-designer
description: Use when you have validated symmetry groups and need to design neural network architecture that respects those symmetries. Invoke when user mentions equivariant layers, G-CNN, e3nn, steerable networks, building symmetry into model, or needs architecture recommendations for specific symmetry groups. Provides architecture patterns and implementation guidance.
---

Equivariant Architecture Designer

What Is It?

This skill helps you design neural network architectures that respect identified symmetry groups. Given a validated group specification, it recommends architecture patterns, specific libraries, and implementation strategies.

The payoff: Equivariant architectures have fewer parameters, train faster, generalize better, and are more robust to distribution shift.

Workflow

Copy this checklist and track your progress:

Architecture Design Progress:
- [ ] Step 1: Review group specification and requirements
- [ ] Step 2: Select architecture family
- [ ] Step 3: Choose specific layers and components
- [ ] Step 4: Design network topology
- [ ] Step 5: Select implementation library
- [ ] Step 6: Create architecture specification

Step 1: Review group specification and requirements

Gather the validated group specification. Confirm: which group(s) are involved, whether invariance or equivariance is needed, the data domain (images, point clouds, graphs, etc.), the task type (classification, regression, generation), and any computational constraints. If the group isn't specified, work with the user to identify it first.

Step 2: Select architecture family

Match the symmetry group to an architecture family using Architecture Selection Guide. Key families: G-CNNs for discrete groups on grids, Steerable CNNs for continuous 2D groups, e3nn/NequIP for E(3) on point data, GNNs for permutation on graphs, DeepSets for permutation on sets. Consider trade-offs between expressiveness and efficiency.

Step 3: Choose specific layers and components

Select layer types based on Layer Patterns. For each layer decide: convolution type (regular, group, steerable), nonlinearity (must preserve equivariance - use gated, norm-based, or tensor product), normalization (batch norm breaks equivariance - use layer norm or equivariant batch norm), pooling (for invariant outputs: use invariant pooling; for equivariant: preserve structure). For detailed design methodology, see Methodology Details.

Step 4: Design network topology

Design the overall network structure: encoder architecture (how features are extracted), feature representations at each stage (irreps for Lie groups), pooling/aggregation strategy, output head matching task requirements. Use Topology Patterns for common designs. Balance depth vs. width for your group size.

Step 5: Select implementation library

Choose library based on Library Reference. Match to your group, framework preference (PyTorch/JAX), and performance needs. Popular choices: e3nn (E(3)/O(3), PyTorch), escnn (discrete groups, PyTorch), pytorch_geometric (permutation, PyTorch). Ensure library supports your specific group.

Step 6: Create architecture specification

Document the design using Output Template. Include: layer-by-layer specification, representation types, library dependencies, expected parameter count, and pseudo-code or actual code skeleton. This specification guides implementation and subsequent equivariance verification. For ready-to-use implementation templates, see Code Templates. Quality criteria for this output are defined in Quality Rubric.

Architecture Selection Guide

By Symmetry Group

| Group | Domain | Recommended Architecture | Library |
|---|---|---|---|
| Cₙ, Dₙ | 2D images | G-CNN, Group Equivariant CNN | escnn, e2cnn |
| SO(2), O(2) | 2D images | Steerable CNN, Harmonic Networks | escnn |
| SO(3) | Spherical | Spherical CNN | e3nn, s2cnn |
| SE(3), E(3) | Point clouds | Equivariant GNN, Tensor Field Networks | e3nn, NequIP |
| Sₙ | Sets | DeepSets | pytorch, jax |
| Sₙ | Graphs | Message Passing GNN | pytorch_geometric |
| E(3) × Sₙ | Molecules | E(3) Equivariant GNN | e3nn, SchNet |

By Task Type

| Task | Output Type | Key Consideration |
|---|---|---|
| Classification | Invariant scalar | Use invariant pooling |
| Regression (scalar) | Invariant scalar | Same as classification |
| Segmentation | Equivariant per-point | Preserve equivariance to output |
| Force prediction | Equivariant vector | Output as l=1 irrep |
| Pose estimation | Equivariant transform | Output rotation + translation |
| Generation | Equivariant structure | Equivariant decoder |

Layer Patterns

Equivariant Convolution Patterns

Standard G-Convolution:

(f ⋆ ψ)(g) = ∫_G f(h) ψ(g⁻¹h) dh
  • Input: Feature map on group G
  • Kernel: Function on G
  • Output: Feature map on G
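
To make this concrete for a discrete group, a C₄ lifting convolution can be written by hand: apply the same kernel at all four 90° rotations and stack the results. The sketch below is illustrative (the class name and initialization are assumptions); in practice a library such as escnn provides this layer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class C4LiftingConv(nn.Module):
    """Sketch of a lifting G-convolution for C4: one learned kernel applied at
    four rotations, giving a feature map indexed by (rotation, x, y)."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.1)

    def forward(self, x):  # x: (B, in_ch, H, W)
        outs = []
        for r in range(4):  # the four rotations in C4
            w_r = torch.rot90(self.weight, r, dims=(-2, -1))
            outs.append(F.conv2d(x, w_r, padding=self.weight.shape[-1] // 2))
        return torch.stack(outs, dim=2)  # (B, out_ch, 4, H, W)
```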

Steerable Convolution:

  • Uses steerable kernels that transform predictably
  • Parameterized by irreducible representations
  • More efficient for continuous groups

e3nn Tensor Product Layer:

from e3nn import o3

# Combine features with different angular momenta via a fully connected tensor product
irreps_in1 = o3.Irreps("16x0e + 16x1o")  # example: scalar + vector node features
irreps_in2 = o3.Irreps("1x0e + 1x1o")    # example: spherical harmonics of edge vectors
irreps_out = o3.Irreps("16x0e + 16x1o")
tp = o3.FullyConnectedTensorProduct(irreps_in1, irreps_in2, irreps_out)
output = tp(input1, input2)  # both inputs must carry their declared irreps

Equivariant Nonlinearities

Problem: Standard nonlinearities (ReLU, etc.) break equivariance.

Solutions:

| Type | How It Works | When to Use |
|---|---|---|
| Norm-based | Apply nonlinearity to feature norms, then rescale | Any group (norms are invariant) |
| Gated | Use invariant scalar to gate equivariant features | General purpose |
| Tensor product | Nonlinearity via Clebsch-Gordan products | e3nn, high quality |
| Invariant features | Only apply to l=0 components | Simple, fast |
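
For example, e3nn's Gate implements the gated pattern: invariant scalars pass through a pointwise activation, and sigmoid-activated gate scalars multiply the l>0 features. A minimal sketch (the irrep multiplicities are arbitrary):

```python
import torch
from e3nn import o3
from e3nn.nn import Gate

gate = Gate(
    "16x0e", [torch.relu],    # invariant scalars and their activation
    "8x0e", [torch.sigmoid],  # gate scalars and their activation
    "8x1o",                   # equivariant (l=1) features to be gated
)
x = o3.Irreps("16x0e + 8x0e + 8x1o").randn(10, -1)  # batch of 10
y = gate(x)  # output irreps: 16x0e + 8x1o, equivariance preserved
```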

Equivariant Normalization

Batch Norm: Breaks equivariance (different stats per orientation)

Solutions:

  • Layer Norm (normalize per sample)
  • Equivariant Batch Norm (normalize per irrep channel)
  • Instance Norm (often OK)
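
As a concrete option, e3nn provides a batch norm that normalizes each irrep channel by its norm, which keeps the layer equivariant (a sketch; the irreps string is arbitrary):

```python
from e3nn import o3
from e3nn.nn import BatchNorm

irreps = o3.Irreps("16x0e + 8x1o")
bn = BatchNorm(irreps)        # normalizes per-irrep norms instead of raw components
y = bn(irreps.randn(32, -1))  # batch of 32
```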

Pooling for Invariance

To get invariant output from equivariant features:

| Method | Formula | When to Use |
|---|---|---|
| Mean pooling | mean over group | Continuous groups |
| Sum pooling | sum over elements | Sets, graphs |
| Max pooling | max over elements | Sets, point clouds |
| Attention pooling | weighted sum | When importance varies |
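
For permutation symmetry, the table above reduces to pooling along the element axis; a minimal sketch (shapes are assumptions):

```python
import torch

h = torch.randn(4, 100, 64)    # (batch, num_elements, features), equivariant per element
mean_inv = h.mean(dim=1)       # mean pooling: invariant to permuting the 100 elements
sum_inv = h.sum(dim=1)         # sum pooling
max_inv = h.max(dim=1).values  # max pooling
```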

Topology Patterns

Encoder-Decoder (Segmentation, Generation)

Input → [Equiv. Encoder] → Latent (equiv.) → [Equiv. Decoder] → Output
  • Encoder: Progressive feature extraction
  • Latent: Equivariant representation
  • Decoder: Reconstruct with symmetry

Encoder-Pooling (Classification)

Input → [Equiv. Encoder] → Features (equiv.) → [Invariant Pool] → [MLP] → Class
  • Pool at the end to get invariant features
  • Final MLP operates on invariant representation
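
A DeepSets-style classifier is the simplest instance of this pattern for permutation symmetry; the sketch below assumes arbitrary hidden sizes and a set represented as a padded tensor:

```python
import torch
import torch.nn as nn

class DeepSetsClassifier(nn.Module):
    """Encoder -> invariant pooling -> MLP head, invariant to element order."""
    def __init__(self, in_dim, hidden=64, n_classes=10):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden))
        self.rho = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_classes))

    def forward(self, x):        # x: (batch, set_size, in_dim)
        h = self.phi(x)          # per-element (permutation-equivariant) encoder
        pooled = h.sum(dim=1)    # invariant pooling over the set
        return self.rho(pooled)  # MLP on the invariant representation
```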

Message Passing (Graphs/Point Clouds)

Nodes → [MP Layer 1] → [MP Layer 2] → ... → [Aggregation] → Output
  • Each layer: aggregate neighbors, update node
  • Aggregation: sum/mean for invariance, per-node for equivariance

Library Reference

e3nn (PyTorch)

Groups: E(3), O(3), SO(3)
Strengths: Full irrep support, tensor products, spherical harmonics
Use for: Molecular modeling, 3D point clouds, physics

from e3nn import o3
irreps = o3.Irreps("2x0e + 2x1o + 1x2e")  # 2 scalars, 2 vectors, 1 tensor

escnn (PyTorch)

Groups: Discrete groups (Cₙ, Dₙ), continuous 2D (SO(2), O(2))
Strengths: Image processing, well-documented
Use for: 2D images with rotation/reflection symmetry

from escnn import gspaces, nn
gspace = gspaces.rot2dOnR2(N=4)  # C4 rotation group
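
A minimal C4-equivariant block might look like the following (channel counts and kernel size are arbitrary choices):

```python
import torch
from escnn import gspaces, nn

gspace = gspaces.rot2dOnR2(N=4)                             # C4 acting on the plane
in_type = nn.FieldType(gspace, [gspace.trivial_repr])       # e.g. a grayscale input
hid_type = nn.FieldType(gspace, 8 * [gspace.regular_repr])  # 8 regular feature fields
block = nn.SequentialModule(
    nn.R2Conv(in_type, hid_type, kernel_size=5, padding=2),
    nn.InnerBatchNorm(hid_type),
    nn.ReLU(hid_type),
)
y = block(nn.GeometricTensor(torch.randn(1, 1, 32, 32), in_type))  # rotation-equivariant features
```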

pytorch_geometric (PyTorch)

Groups: Permutation (Sₙ)
Strengths: Graphs, batching, many GNN layers
Use for: Graph classification/regression, node prediction

from torch_geometric.nn import GCNConv, global_mean_pool
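
For example, a small graph classifier that is permutation-equivariant over nodes and invariant after the readout (a sketch; layer sizes are arbitrary):

```python
import torch
from torch_geometric.nn import GCNConv, global_mean_pool

class GraphClassifier(torch.nn.Module):
    """Message passing layers followed by an invariant mean-pool readout."""
    def __init__(self, in_dim, hidden=64, n_classes=2):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.lin = torch.nn.Linear(hidden, n_classes)

    def forward(self, x, edge_index, batch):
        h = self.conv1(x, edge_index).relu()
        h = self.conv2(h, edge_index).relu()
        return self.lin(global_mean_pool(h, batch))  # one invariant prediction per graph
```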

Other Libraries

| Library | Groups | Framework | Notes |
|---|---|---|---|
| NequIP | E(3) | PyTorch | Molecular dynamics |
| MACE | E(3) | PyTorch | Molecular potentials |
| jraph | Sₙ | JAX | Graph networks |
| geomstats | Lie groups | NumPy/PyTorch | Manifold learning |

Output Template

ARCHITECTURE SPECIFICATION
==========================

Target Symmetry: [Group name and notation]
Symmetry Type: [Invariant/Equivariant]
Task: [Classification/Regression/etc.]
Domain: [Images/Point clouds/Graphs/etc.]

Architecture Family: [e.g., E(3) Equivariant GNN]
Library: [e.g., e3nn]

Layer Specification:
1. Input Layer
   - Input type: [e.g., 3D coordinates + features]
   - Representation: [e.g., positions (l=1) + scalars (l=0)]

2. [Layer Name]
   - Type: [Convolution/Tensor Product/Message Passing]
   - Input irreps: [specification]
   - Output irreps: [specification]
   - Nonlinearity: [Gated/Norm/None]

3. [Continue for each layer...]

N. Output Layer
   - Aggregation: [Mean/Sum/Attention]
   - Output: [Invariant scalar / Equivariant vector / etc.]

Estimated Parameters: [count]
Key Dependencies: [library versions]

Code Skeleton:
[Provide implementation outline or pseudo-code]

NEXT STEPS:
- Implement the architecture using the specified library
- Verify equivariance through numerical testing after implementation
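
For the verification step, a typical numerical check applies a random group element to the input and compares against transforming the output. The sketch below assumes an e3nn-style model whose input and output are flat irreps tensors; the helper name is illustrative:

```python
import torch
from e3nn import o3

def check_equivariance(model, irreps_in, irreps_out, n_checks=5, atol=1e-5):
    """Check f(D_in(R) x) == D_out(R) f(x) for random rotations R."""
    for _ in range(n_checks):
        x = irreps_in.randn(8, -1)          # random batch with the input irreps
        R = o3.rand_matrix()                # random rotation
        D_in = irreps_in.D_from_matrix(R)   # block-diagonal Wigner-D on the input
        D_out = irreps_out.D_from_matrix(R) # ... and on the output
        out_of_rotated = model(x @ D_in.T)
        rotated_output = model(x) @ D_out.T
        assert torch.allclose(out_of_rotated, rotated_output, atol=atol), "equivariance violated"
```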