Dejia Xu, Yifan Jiang, Peihao Wang, Zhiwen Fan, Zhangyang Wang
NeurIPS 2022
Implicit Representation Processing with Differential Operators
Suppose we have already acquired an Implicit Neural Representation (INR) $\Phi: \mathbb{R}^m \rightarrow \mathbb{R}$. Can we run signal processing directly on the implicitly represented signal?
- One straightforward solution is to rasterize the implicit field onto a 2D/3D lattice and run a conventional kernel on the resulting pixel/voxel grid (a minimal sketch follows below).
However, this decoding strategy fixes a finite resolution and discretizes the signal, which is memory-inefficient and poorly suited to modeling fine details.
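A minimal sketch of this rasterize-then-filter baseline, assuming a 2D INR `phi` defined over $[-1,1]^2$ (function and variable names here are illustrative, not from the paper):

```python
import torch
import torch.nn.functional as F

def rasterize_and_filter(phi, kernel, resolution=256):
    """phi: callable INR mapping (N, 2) coords in [-1, 1]^2 to (N, 1) values;
    kernel: (1, 1, k, k) convolution kernel, e.g. a Gaussian blur."""
    # Build the sampling lattice.
    xs = torch.linspace(-1.0, 1.0, resolution)
    grid = torch.stack(torch.meshgrid(xs, xs, indexing="ij"), dim=-1)  # (H, W, 2)
    coords = grid.reshape(-1, 2)

    # Decode the implicit field into a finite-resolution image.
    with torch.no_grad():
        img = phi(coords).reshape(1, 1, resolution, resolution)

    # Run an ordinary discrete kernel on the pixel grid.
    return F.conv2d(img, kernel, padding=kernel.shape[-1] // 2)
```

The output lives on a fixed 256×256 grid, which is exactly the discretization the continuous formulation below avoids.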
Computational Paradigm
- $\mathbf{\mathcal{A}}$ : Proposed signal processing operator;
- $\mathbf{\Phi}$ : Input INR;
- $\mathbf{\Psi}$ : Resultant INR processed by operator $\mathbf{\mathcal{A}}$ : $\Psi=\mathcal{A} \Phi: \mathbb{R}^m \rightarrow \mathbb{R}$
$$\Psi(\boldsymbol{x}):=\mathcal{A} \Phi(\boldsymbol{x})=\Pi\left(\Phi(\boldsymbol{x}), \nabla \Phi(\boldsymbol{x}), \nabla^2 \Phi(\boldsymbol{x}), \cdots, \nabla^k \Phi(\boldsymbol{x}), \cdots\right)$$
- $\Pi$ : a continuous fusion function mapping the value and derivatives of $\Phi$ to the output; it can be either handcrafted or learned from data (e.g., parameterized by an MLP).
- The input dimension of $\Pi$ depends on the highest order of derivatives used.
Trainable Parameters:
- Only the parameters in the fusion layer $\Pi$ (the input INR $\Phi$ is kept frozen)
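A minimal PyTorch sketch of this paradigm using only first-order derivatives, with a small MLP as the learned fusion layer $\Pi$ (class and variable names are our own assumptions, not the paper's code):

```python
import torch
import torch.nn as nn

class INRProcessor(nn.Module):
    """Psi(x) = Pi(Phi(x), grad Phi(x)): a frozen INR Phi plus a trainable fusion MLP Pi."""
    def __init__(self, phi, in_dim=2, hidden=64):
        super().__init__()
        self.phi = phi                          # frozen input INR Phi: R^m -> R
        for p in self.phi.parameters():
            p.requires_grad_(False)
        # Pi fuses [Phi(x), grad Phi(x)]; its input dim grows with the derivative order used.
        self.pi = nn.Sequential(
            nn.Linear(1 + in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):                       # x: (N, m) query coordinates
        x = x.detach().requires_grad_(True)
        val = self.phi(x)                       # Phi(x), shape (N, 1)
        # create_graph=True keeps the graph so higher-order derivatives could be added.
        grad = torch.autograd.grad(val.sum(), x, create_graph=True)[0]  # (N, m)
        return self.pi(torch.cat([val, grad], dim=-1))                  # Psi(x): (N, 1)
```

Because the derivatives are taken analytically through autograd, $\Psi$ can be queried at arbitrary coordinates, just like $\Phi$; higher-order terms ($\nabla^2 \Phi$, ...) can be concatenated in the same way, enlarging the input dimension of $\Pi$.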
Experiments
- Low-level vision tasks
    - Edge detection (see the handcrafted sketch after this list)
    - Image denoising
    - Image blurring
    - Image deblurring
    - Image inpainting
- Geometry processing on SDFs
    - Smoothing
- High-level vision tasks
    - Classification on ImageNet
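As an example of a handcrafted $\Pi$, edge detection can be expressed directly in the paradigm above as $\Pi(\Phi, \nabla\Phi) = \|\nabla\Phi\|$; the sketch below is our illustration under that assumption, not necessarily the paper's exact operator:

```python
import torch

def edge_detect(phi, x):
    """Handcrafted Pi: edge strength = ||grad Phi(x)||.
    phi: image INR R^2 -> R; x: (N, 2) query coordinates."""
    x = x.detach().requires_grad_(True)
    val = phi(x)                                   # Phi(x), shape (N, 1)
    grad = torch.autograd.grad(val.sum(), x)[0]    # spatial gradient, shape (N, 2)
    return grad.norm(dim=-1, keepdim=True)         # Psi(x): edge map, still resolution-free
```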