Persistent identifier to cite or link this item:
http://hdl.handle.net/10553/130228
Title: NeRFLight: fast and light neural radiance fields using a shared feature grid
Authors: Rivas-Manzaneque, Fernando; Sierra-Acosta, Jorge; Peñate Sánchez, Adrián; Moreno-Noguer, Francesc; Ribeiro, Angela
UNESCO classification: 1203 Computer science
Keywords: 3D from multi-view and sensors
Publication date: 2023
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Serial publication: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Conference: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Abstract: While original Neural Radiance Fields (NeRF) have shown impressive results in modeling the appearance of a scene with compact MLP architectures, they are not able to achieve real-time rendering. This has recently been addressed by either baking the outputs of NeRF into a data structure or arranging trainable parameters in an explicit feature grid. These strategies, however, significantly increase the memory footprint of the model, which prevents their deployment in bandwidth-constrained applications. In this paper, we extend the grid-based approach to achieve real-time view synthesis at more than 150 FPS using a lightweight model. Our main contribution is a novel architecture in which the density field of NeRF-based representations is split into N regions and the density is modeled using N different decoders which reuse the same feature grid. This results in a smaller grid where each feature is located in more than one spatial position, forcing them to learn a compact representation that is valid for different parts of the scene. We further reduce the size of the final model by arranging the features symmetrically in each region, which favors feature pruning after training while also allowing smooth gradient transitions between neighboring voxels. An exhaustive evaluation demonstrates that our method achieves real-time performance and quality metrics on a par with the state of the art, with an improvement of more than 2× in the FPS/MB ratio.
URI: http://hdl.handle.net/10553/130228
ISBN: 979-8-3503-0129-8; 979-8-3503-0130-4
ISSN: 1063-6919
DOI: 10.1109/CVPR52729.2023.01195
Source: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023. [ISSN: 2575-7075], (17-24 June 2023).
Collection: Conference proceedings
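The abstract above describes splitting the density field into N regions whose N density decoders all reuse a single shared feature grid, with features laid out symmetrically in each region. The sketch below is a minimal, hedged illustration of that sharing idea only, not the authors' implementation: the grid resolution, feature size, slab-wise region split, and tiny per-region MLP decoders are all assumptions made for the example.

```python
# Minimal sketch (not the paper's code): one shared feature grid is reused by
# N region-specific density decoders. All sizes and shapes are illustrative.
import numpy as np

GRID_RES = 32   # resolution of the shared feature grid (assumed)
FEAT_DIM = 8    # feature channels per voxel (assumed)
N_REGIONS = 4   # number of spatial regions / density decoders (assumed)

rng = np.random.default_rng(0)
shared_grid = rng.normal(size=(GRID_RES, GRID_RES, GRID_RES, FEAT_DIM))

# One tiny MLP per region; every decoder reads features from the *same* grid.
decoders = [
    (rng.normal(size=(FEAT_DIM, 16)), rng.normal(size=(16, 1)))
    for _ in range(N_REGIONS)
]

def trilinear_lookup(grid, p):
    """Trilinearly interpolate the grid at a point p in [0, 1)^3."""
    x = p * (GRID_RES - 1)
    i0 = np.floor(x).astype(int)
    i1 = np.minimum(i0 + 1, GRID_RES - 1)
    t = x - i0
    out = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((1 - t[0]) if dx == 0 else t[0]) \
                    * ((1 - t[1]) if dy == 0 else t[1]) \
                    * ((1 - t[2]) if dz == 0 else t[2])
                idx = (i1[0] if dx else i0[0],
                       i1[1] if dy else i0[1],
                       i1[2] if dz else i0[2])
                out = out + w * grid[idx]
    return out

def density(p_world):
    """Map a world point to its region, reuse the shared grid, decode density."""
    # Toy region assignment: split the unit cube into N_REGIONS slabs along x
    # (an assumption; the paper's region layout may differ).
    region = min(int(p_world[0] * N_REGIONS), N_REGIONS - 1)
    # Local coordinate inside the slab, so every region addresses the same grid.
    p_local = np.array([p_world[0] * N_REGIONS - region, p_world[1], p_world[2]])
    feat = trilinear_lookup(shared_grid, p_local)
    w1, w2 = decoders[region]
    h = np.maximum(feat @ w1, 0.0)                 # ReLU hidden layer
    return float(np.log1p(np.exp(h @ w2))[0])      # softplus -> non-negative density

print(density(np.array([0.3, 0.5, 0.7])))
```

The point the example is meant to show is the sharing itself: the same `shared_grid` array is indexed by every region's decoder, which is what allows the grid (and hence the model) to stay small while each feature serves several parts of the scene.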
SCOPUS™ citations: 2 (updated 17-Nov-2024)
Web of Science™ citations: 1 (updated 17-Nov-2024)
Items in ULPGC accedaCRIS are protected by copyright, with all rights reserved, unless otherwise indicated.