Please use this identifier to cite or link to this item: http://hdl.handle.net/10553/130228
DC Field | Value | Language
dc.contributor.author | Rivas-Manzaneque, Fernando | en_US
dc.contributor.author | Sierra-Acosta, Jorge | en_US
dc.contributor.author | Peñate Sánchez, Adrián | en_US
dc.contributor.author | Moreno-Noguer, Francesc | en_US
dc.contributor.author | Ribeiro, Angela | en_US
dc.date.accessioned | 2024-05-08T15:30:28Z | -
dc.date.available | 2024-05-08T15:30:28Z | -
dc.date.issued | 2023 | en_US
dc.identifier.isbn | 979-8-3503-0129-8 | en_US
dc.identifier.isbn | 979-8-3503-0130-4 | -
dc.identifier.issn | 1063-6919 | en_US
dc.identifier.uri | http://hdl.handle.net/10553/130228 | -
dc.description.abstract | While original Neural Radiance Fields (NeRF) have shown impressive results in modeling the appearance of a scene with compact MLP architectures, they are not able to achieve real-time rendering. This has been recently addressed by either baking the outputs of NeRF into a data structure or arranging trainable parameters in an explicit feature grid. These strategies, however, significantly increase the memory footprint of the model, which prevents their deployment in bandwidth-constrained applications. In this paper, we extend the grid-based approach to achieve real-time view synthesis at more than 150 FPS using a lightweight model. Our main contribution is a novel architecture in which the density field of NeRF-based representations is split into N regions and the density is modeled using N different decoders which reuse the same feature grid. This results in a smaller grid where each feature is located in more than one spatial position, forcing them to learn a compact representation that is valid for different parts of the scene. We further reduce the size of the final model by disposing the features symmetrically on each region, which favors feature pruning after training while also allowing smooth gradient transitions between neighboring voxels. An exhaustive evaluation demonstrates that our method achieves real-time performance and quality metrics on a par with the state of the art, with an improvement of more than 2× in the FPS/MB ratio. | en_US
dc.language | eng | en_US
dc.publisher | Institute of Electrical and Electronics Engineers (IEEE) | en_US
dc.relation.ispartof | Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition | en_US
dc.source | IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023. [ISSN: 2575-7075], (17-24 June 2023). | en_US
dc.subject | 1203 Ciencia de los ordenadores | en_US
dc.subject.other | 3D from multi-view and sensors | en_US
dc.title | NeRFLight: fast and light neural radiance fields using a shared feature grid | en_US
dc.type | Conference Paper | en_US
dc.relation.conference | IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) | en_US
dc.identifier.doi | 10.1109/CVPR52729.2023.01195 | en_US
dc.identifier.scopus | 2-s2.0-85173909733 | -
dc.contributor.orcid | #NODATA# | -
dc.contributor.orcid | #NODATA# | -
dc.contributor.orcid | #NODATA# | -
dc.contributor.orcid | #NODATA# | -
dc.contributor.orcid | #NODATA# | -
dc.identifier.eissn | 2575-7075 | -
dc.investigacion | Ingeniería y Arquitectura | en_US
dc.type2 | Actas de congresos | en_US
dc.description.numberofpages | 11 | en_US
dc.utils.revision |  | en_US
dc.date.coverdate | June 2023 | en_US
dc.identifier.ulpgc |  | en_US
dc.contributor.buulpgc | BU-INF | en_US
dc.description.sjr | 10,331 | -
dc.description.sjrq |  | -
item.grantfulltext | none | -
item.fulltext | Sin texto completo | -
crisitem.event.eventsstartdate | 18-06-2023 | -
crisitem.event.eventsenddate | 22-06-2023 | -
crisitem.author.dept | GIR SIANI: Inteligencia Artificial, Redes Neuronales, Aprendizaje Automático e Ingeniería de Datos | -
crisitem.author.dept | IU Sistemas Inteligentes y Aplicaciones Numéricas | -
crisitem.author.dept | Departamento de Informática y Sistemas | -
crisitem.author.orcid | 0000-0003-2876-3301 | -
crisitem.author.parentorg | IU Sistemas Inteligentes y Aplicaciones Numéricas | -
crisitem.author.fullName | Peñate Sánchez, Adrián | -
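The shared-feature-grid idea described in the abstract can be sketched as a toy NumPy example. This is an illustrative sketch only, not the authors' implementation: the grid size, the linear per-region "decoders", and the `density` function are assumptions made for illustration. It shows the core mechanism — the scene is split into N regions, each region has its own small density decoder, and all decoders index into the same feature grid, so every grid feature is reused at N spatial positions.

```python
import numpy as np

# Illustrative sketch (not the authors' code) of the NeRFLight idea:
# one shared feature grid reused by N small density decoders, one per
# spatial region, so the grid stays far smaller than a per-region grid.

RES = 8          # voxels per axis of the shared grid (toy size)
FEAT = 4         # features per voxel
N_REGIONS = 2    # scene split into N regions along the x axis

rng = np.random.default_rng(0)
shared_grid = rng.standard_normal((RES, RES, RES, FEAT))

# One tiny linear "decoder" per region; all reuse the same grid features.
decoders = [rng.standard_normal((FEAT,)) for _ in range(N_REGIONS)]

def density(p):
    """Map a point p in [0, 1)^3 to a non-negative density value.

    The x axis is split into N_REGIONS slabs; each slab has its own
    decoder but indexes into the *same* shared feature grid, so each
    grid feature is reused at N_REGIONS spatial positions.
    """
    region = min(int(p[0] * N_REGIONS), N_REGIONS - 1)
    local_x = p[0] * N_REGIONS - region      # coordinate inside the slab
    idx = (np.array([local_x, p[1], p[2]]) * (RES - 1)).astype(int)
    feat = shared_grid[idx[0], idx[1], idx[2]]
    return float(np.maximum(feat @ decoders[region], 0.0))  # ReLU density
```

Two points with the same local coordinates in different regions (e.g. `[0.1, 0.5, 0.5]` and `[0.6, 0.5, 0.5]` with two regions) read the *same* grid feature but decode it through different heads, which is what forces the shared features to stay valid for multiple parts of the scene.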
Appears in Collections: Actas de congresos

SCOPUS™ Citations: 2 (checked on Nov 17, 2024)
WEB OF SCIENCE™ Citations: 1 (checked on Nov 17, 2024)

Items in accedaCRIS are protected by copyright, with all rights reserved, unless otherwise indicated.