"Representation Equivalent Neural Operators"
Recently, operator learning, that is, learning mappings between infinite-dimensional function spaces, has garnered significant attention, notably in connection with learning partial differential equations from data. While neural operators are conceptually straightforward, the discretization required for any computational implementation often undermines their fidelity to the underlying mathematical operators, leading to discrepancies with tangible practical consequences.
This talk introduces a new take on neural operators through a novel framework, Representation Equivalent Neural Operators (ReNOs), which makes it possible to assess and handle these discrepancies. At its core is the concept of operator aliasing, which quantifies the inconsistency between a neural operator and its discrete representations. The talk will introduce these concepts and discuss their practical implications through a novel convolution-based neural operator built on this framework.
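To make operator aliasing concrete, here is a minimal sketch in assumed notation (the symbols G, g, A, and f below are illustrative and do not come from the abstract): let G be the underlying operator between function spaces, g its discrete computational representation, and A a discretization (sampling) map taking a function to its values on a grid. One way to express the aliasing error on an input function f is

\[
  % Illustrative notation only; G, g, A, f are assumptions, not taken from the abstract.
  \varepsilon(f) \;=\; \bigl\| \, A\bigl(G(f)\bigr) \;-\; g\bigl(A(f)\bigr) \, \bigr\|,
\]

that is, the extent to which "apply the operator, then discretize" fails to agree with "discretize, then apply the discrete representation". In this reading, a representation equivalent neural operator is one for which this error vanishes for all admissible inputs.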