Total Denoising: Unsupervised Learning of 3D Point Cloud Cleaning

Pedro Hermosilla

Ulm University

Tobias Ritschel

University College London

Timo Ropinski

Ulm University

IEEE/CVF International Conference on Computer Vision 2019

Abstract

We show that denoising of 3D point clouds can be learned unsupervised, directly from noisy 3D point cloud data only. This is achieved by extending recent ideas from learning of unsupervised image denoisers to unstructured 3D point clouds. Unsupervised image denoisers operate under the assumption that a noisy pixel observation is a random realization of a distribution around a clean pixel value, which allows appropriate learning on this distribution to eventually converge to the correct value. Regrettably, this assumption is not valid for unstructured points: 3D point clouds are subject to total noise, i.e., deviations in all coordinates, with no reliable pixel grid. Thus, an observation can be the realization of an entire manifold of clean 3D points, which makes a naïve extension of unsupervised image denoisers to 3D point clouds impractical. Overcoming this, we introduce a spatial prior term that steers convergence to the unique closest of the many possible modes on the manifold. Our results demonstrate unsupervised denoising performance similar to that of supervised learning with clean data when given enough training examples, while requiring no pairs of noisy and clean training data.
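The abstract's core idea can be illustrated with a minimal sketch, under stated assumptions: a network predicts a denoised position for every noisy point, and, in the absence of clean targets, each prediction is pulled toward other noisy points in its neighborhood, with distance-based weights acting as a spatial prior that favors the nearest mode of the underlying surface. This is not the paper's exact formulation; `net`, `k`, and `sigma` are illustrative placeholders.

```python
import torch

def unsupervised_point_denoising_loss(noisy, denoised, k=16, sigma=0.05):
    """Sketch of an unsupervised point-cloud denoising loss.

    noisy, denoised: (N, 3) tensors of noisy inputs and predicted positions.
    Returns a scalar loss computed only from noisy data (no clean targets).
    """
    # Pairwise distances among the noisy points.
    d = torch.cdist(noisy, noisy)                     # (N, N)
    # k nearest noisy neighbors of each noisy point (including itself).
    knn_d, knn_idx = torch.topk(d, k, largest=False)  # (N, k)
    neighbors = noisy[knn_idx]                        # (N, k, 3)
    # Spatial prior (assumed Gaussian weighting): closer noisy realizations
    # get larger weight, steering convergence toward the nearest mode.
    w = torch.exp(-(knn_d ** 2) / (2 * sigma ** 2))   # (N, k)
    w = w / w.sum(dim=1, keepdim=True)
    # Expected squared distance from each prediction to its weighted
    # neighborhood of noisy observations.
    diff = denoised.unsqueeze(1) - neighbors          # (N, k, 3)
    return (w * diff.pow(2).sum(dim=-1)).mean()

# Hypothetical training step: the network only ever sees noisy points,
# e.g. denoised = noisy + net(noisy), then backpropagate this loss.
```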

Bibtex

@inproceedings{hermosilla19totalnoise,
	title={Total Denoising: Unsupervised Learning of 3D Point Cloud Cleaning},
	author={Hermosilla, Pedro and Ritschel, Tobias and Ropinski, Timo},
	booktitle={Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision, ICCV 2019, Seoul, Korea (South), October 27 - November 2, 2019},
	year={2019},
	pages={52--60}
}