Abstract
Traditional image processing methods based on PDEs offer a wealth of meaningful regularizers and a solid theoretical foundation for a wide range of image-related tasks, which makes their integration into neural networks a promising avenue. In this paper, we introduce a novel regularization approach inspired by reversing the evolution of PDE-based models. Specifically, we propose inverse evolution layers (IELs), which act as amplifiers of undesirable properties, penalizing neural networks whose outputs exhibit unwanted characteristics. With IELs, one can achieve specific regularization objectives and endow neural network outputs with the corresponding properties of the underlying PDE models. Our experiments on semantic segmentation tasks with heat-diffusion IELs demonstrate their effectiveness in mitigating the effects of noisy labels. In addition, we develop curve-motion IELs to enforce convex-shape regularization in neural network–based segmentation models, preventing the generation of concave outputs. Our results indicate that IELs offer a promising regularization mechanism for addressing challenges related to noisy labels.
Original language | English |
---|---|
Journal | SIAM Journal on Mathematics of Data Science |
Volume | 7 |
Issue number | 1 |
Early online date | 3 Jan 2025 |
DOIs | |
Publication status | E-pub ahead of print - 3 Jan 2025 |
Keywords
- image segmentation
- physics-informed regularizers
- inverse evolution layers
- noisy label
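To illustrate the idea behind the heat-diffusion IELs described in the abstract, the sketch below runs one explicit step of the heat equation backward in time, which amplifies high-frequency (noisy) content instead of smoothing it. This is a minimal illustration under stated assumptions, not the authors' implementation: the function names, the 5-point discrete Laplacian, and the step size `tau` are all illustrative choices.

```python
import numpy as np

def laplacian(u):
    # 5-point discrete Laplacian with replicate (edge) padding.
    p = np.pad(u, 1, mode="edge")
    return (p[:-2, 1:-1] + p[2:, 1:-1]
            + p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * u)

def inverse_heat_layer(u, tau=0.1, steps=1):
    # Forward heat diffusion is u <- u + tau * Laplacian(u) (smoothing);
    # the *inverse* evolution flips the sign, so oscillatory components
    # of u grow with each step, making "bad" properties easy to penalize.
    for _ in range(steps):
        u = u - tau * laplacian(u)
    return u
```

A constant field is a steady state and passes through unchanged, while a checkerboard pattern (the highest-frequency mode) has its amplitude magnified, which is exactly the amplification a training loss can then penalize.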