Traditional 3D shape reconstruction techniques from multi-view images, such as structure from motion and multi-view stereo, primarily target opaque surfaces. Similarly, recent advances in neural radiance fields and their variants also focus on opaque objects and struggle with the complex lighting effects caused by transparent materials. This paper introduces α-NeuS, a new method for simultaneously reconstructing thin transparent objects and opaque objects based on neural implicit surfaces (NeuS). We prove that transparent surfaces align with non-negative local minima and opaque surfaces align with the zero iso-surface in the learned distance field of NeuS. Traditional iso-surfacing algorithms such as marching cubes, which rely on a fixed iso-value, are therefore ill-suited for this data. We develop a method based on DCUDF that extracts the transparent and opaque surfaces simultaneously. To validate our approach, we construct a benchmark that includes both real-world and synthetic scenes, demonstrating its practical utility and effectiveness.
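To build intuition for why a fixed iso-value fails, here is a minimal 1D sketch (not the paper's implementation; the toy field, sample positions, and thresholds are illustrative assumptions). Along a ray, an opaque surface shows up as a zero crossing of the learned distance field, while a thin transparent surface shows up as a non-negative local minimum that dips toward zero without crossing it, so sign-change extraction alone misses it:

```python
# Toy 1D slice of a learned distance field sampled along a ray.
# Hypothetical setup: a thin transparent sheet near t = 0.3 (the field
# dips to a small positive value) and an opaque surface at t = 0.7
# (the field changes sign).
N = 201
ts = [i / (N - 1) for i in range(N)]

def field(t):
    # Illustrative field, not a trained network:
    # positive dip at 0.3, zero crossing at 0.7.
    return abs(t - 0.3) + 0.02 if t < 0.5 else 0.7 - t

f = [field(t) for t in ts]

# Fixed iso-value extraction (marching-cubes style): sign changes only.
zero_crossings = [i for i in range(N - 1) if f[i] > 0 >= f[i + 1]]

# Non-negative local minima: interior samples below both neighbours.
local_minima = [i for i in range(1, N - 1)
                if f[i] >= 0 and f[i] < f[i - 1] and f[i] < f[i + 1]]

print([ts[i] for i in zero_crossings])  # finds only the opaque surface
print([ts[i] for i in local_minima])    # also finds the transparent sheet
```

The sign-change pass recovers only the opaque surface near t = 0.7; the non-negative-local-minimum pass additionally recovers the transparent sheet near t = 0.3, which is the structure α-NeuS exploits (in 3D, via a DCUDF-based extraction rather than this 1D scan).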
@inproceedings{zhang2024from,
title = {{From Transparent to Opaque: Rethinking Neural Implicit Surfaces with $\alpha$-NeuS}},
author = {Zhang, Haoran and Deng, Junkai and Chen, Xuhui and Hou, Fei and Wang, Wencheng and Qin, Hong and Qian, Chen and He, Ying},
booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
publisher = {Curran Associates, Inc.},
year = {2024},
}
Several excellent works were introduced concurrently with ours, addressing similar challenges:
αSurf tackles the same problem of transparent surface reconstruction. It uses a Plenoxels-based representation, which makes it very fast.
NU-NeRF also focuses on transparent surface reconstruction and additionally supports reflection and refraction.
We encourage readers to check out these works as well.