Persistent identifier to cite or link to this item:
http://hdl.handle.net/10553/134400
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Mazzucchelli, Alessio | - |
dc.contributor.author | Garcia-Garcia, Adrian | - |
dc.contributor.author | Garces, Elena | - |
dc.contributor.author | Rivas-Manzaneque, Fernando | - |
dc.contributor.author | Moreno-Noguer, Francesc | - |
dc.contributor.author | Peñate Sánchez, Adrián | - |
dc.date.accessioned | 2024-10-10T17:29:13Z | - |
dc.date.available | 2024-10-10T17:29:13Z | - |
dc.date.issued | 2024 | - |
dc.identifier.isbn | 979-8-3503-5301-3 | - |
dc.identifier.issn | 1063-6919 | - |
dc.identifier.other | Scopus | - |
dc.identifier.uri | http://hdl.handle.net/10553/134400 | - |
dc.description.abstract | Advances in NeRFs have allowed for 3D scene reconstruction and novel view synthesis. Yet, efficiently editing these representations while retaining photorealism is an emerging challenge. Recent methods face three primary limitations: they are slow for interactive use, lack precision at object boundaries, and struggle to ensure multi-view consistency. We introduce IReNe to address these limitations, enabling swift, near real-time color editing in NeRF. Leveraging a pre-trained NeRF model and a single training image with user-applied color edits, IReNe swiftly adjusts network parameters in seconds. This adjustment allows the model to generate new scene views, accurately representing the color changes from the training image while also controlling object boundaries and view-specific effects. Object boundary control is achieved by integrating a trainable segmentation module into the model. The process gains efficiency by retraining only the weights of the last network layer. We observed that neurons in this layer can be classified into those responsible for view-dependent appearance and those contributing to diffuse appearance. We introduce an automated classification approach to identify these neuron types and exclusively fine-tune the weights of the diffuse neurons. This further accelerates training and ensures consistent color edits across different views. A thorough validation on a new dataset, with edited object colors, shows significant quantitative and qualitative advancements over competitors, accelerating speeds by 5x to 500x. (An illustrative sketch of the last-layer fine-tuning idea follows the record below.) | - |
dc.language | eng | - |
dc.publisher | Institute of Electrical and Electronics Engineers (IEEE) | - |
dc.relation.ispartof | IEEE Computer Society Conference on Computer Vision and Pattern Recognition workshops | - |
dc.source | IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), p. 5937-5946. (16-22 June 2024) | - |
dc.subject | 1203 Computer science | - |
dc.subject.other | Color | - |
dc.subject.other | Deep Learning | - |
dc.subject.other | Editing | - |
dc.subject.other | Editing In Neural Radiance Field | - |
dc.subject.other | Nerf | - |
dc.subject.other | Neural Radiance Field | - |
dc.subject.other | Recoloring | - |
dc.subject.other | Segmentation | - |
dc.title | IReNe: Instant Recoloring of Neural Radiance Fields | - |
dc.type | conference_paper | - |
dc.relation.conference | IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) | - |
dc.identifier.doi | 10.1109/CVPR52733.2024.00567 | - |
dc.identifier.scopus | 85207306623 | - |
dc.contributor.orcid | NO DATA | - |
dc.contributor.orcid | NO DATA | - |
dc.contributor.orcid | NO DATA | - |
dc.contributor.orcid | NO DATA | - |
dc.contributor.orcid | NO DATA | - |
dc.contributor.orcid | NO DATA | - |
dc.contributor.authorscopusid | 58086196400 | - |
dc.contributor.authorscopusid | 59181539900 | - |
dc.contributor.authorscopusid | 55785453700 | - |
dc.contributor.authorscopusid | 58533942600 | - |
dc.contributor.authorscopusid | 24076818700 | - |
dc.contributor.authorscopusid | 26421312300 | - |
dc.identifier.eissn | 2575-7075 | - |
dc.description.lastpage | 5946 | - |
dc.description.firstpage | 5937 | - |
dc.investigacion | Sciences | - |
dc.type2 | Conference proceedings | - |
dc.description.notas | Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024 (recognized by ANECA as equivalent to Q1 JCR) | - |
dc.description.numberofpages | 10 | - |
dc.identifier.eisbn | 979-8-3503-5300-6 | - |
dc.utils.revision | Yes | - |
dc.date.coverdate | June 2024 | - |
dc.identifier.conferenceid | events155498 | - |
dc.identifier.ulpgc | Yes | - |
dc.contributor.buulpgc | BU-INF | - |
item.grantfulltext | open | - |
item.fulltext | With full text | - |
crisitem.event.eventsstartdate | 14-04-2023 | - |
crisitem.event.eventsenddate | 15-04-2023 | - |
crisitem.author.dept | GIR SIANI: Inteligencia Artificial, Redes Neuronales, Aprendizaje Automático e Ingeniería de Datos | - |
crisitem.author.dept | IU Sistemas Inteligentes y Aplicaciones Numéricas | - |
crisitem.author.dept | Departamento de Informática y Sistemas | - |
crisitem.author.orcid | 0000-0003-2876-3301 | - |
crisitem.author.parentorg | IU Sistemas Inteligentes y Aplicaciones Numéricas | - |
crisitem.author.fullName | Peñate Sánchez, Adrián | - |
Collection: | Conference proceedings
Items in ULPGC accedaCRIS are protected by copyright, with all rights reserved, unless otherwise indicated.
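
The abstract describes two mechanisms: classifying last-layer neurons into view-dependent and diffuse groups, and fine-tuning only the weights tied to the diffuse neurons. The following is a minimal, illustrative sketch of those two ideas under stated assumptions, not the authors' implementation: the toy PyTorch colour head, the names (`TinyNeRFHead`, `classify_neurons`, `finetune_diffuse_only`), the variance-based classification heuristic, and all hyperparameters are hypothetical.

```python
# Hedged sketch (not the IReNe code): (1) flag hidden neurons whose activation
# varies with view direction, (2) retrain only the final RGB layer while
# zeroing the gradient columns that read from view-dependent neurons.
import torch
import torch.nn as nn


class TinyNeRFHead(nn.Module):
    """Stand-in for the colour head of a pre-trained NeRF (hypothetical)."""

    def __init__(self, feat_dim=64, view_dim=27, hidden=32):
        super().__init__()
        self.hidden_layer = nn.Linear(feat_dim + view_dim, hidden)
        self.rgb_layer = nn.Linear(hidden, 3)  # the "last layer" to retrain

    def forward(self, feat, view_enc):
        h = torch.relu(self.hidden_layer(torch.cat([feat, view_enc], dim=-1)))
        return torch.sigmoid(self.rgb_layer(h)), h


def classify_neurons(model, feat, n_dirs=64, view_dim=27, tol=1e-4):
    """Heuristic: a hidden neuron is 'view-dependent' if its activation varies
    when the same points are rendered under many random view encodings."""
    with torch.no_grad():
        dirs = torch.randn(n_dirs, view_dim)
        acts = torch.stack([model(feat, d.expand(feat.shape[0], -1))[1]
                            for d in dirs])              # [n_dirs, N, hidden]
        view_dependent = acts.std(dim=0).mean(dim=0) > tol
    return view_dependent                                # bool mask per neuron


def finetune_diffuse_only(model, feat, view_enc, target_rgb, steps=200, lr=1e-2):
    """Retrain only the last layer; freeze the weight columns fed by
    view-dependent neurons so edits stay consistent across views."""
    view_dep = classify_neurons(model, feat)
    opt = torch.optim.Adam(model.rgb_layer.parameters(), lr=lr)
    for _ in range(steps):
        pred, _ = model(feat, view_enc)
        loss = torch.mean((pred - target_rgb) ** 2)      # match edited colours
        opt.zero_grad()
        loss.backward()
        model.rgb_layer.weight.grad[:, view_dep] = 0.0   # keep view-dep. columns
        opt.step()
    return model


if __name__ == "__main__":
    head = TinyNeRFHead()
    feat, view = torch.randn(128, 64), torch.randn(128, 27)
    target = torch.rand(128, 3)                          # user-edited colours
    finetune_diffuse_only(head, feat, view, target)
```

Zeroing the gradient columns of the view-dependent neurons, rather than editing their weights, is one simple way to leave view-specific effects untouched while only the diffuse colour response is retrained; the paper's trainable segmentation module for boundary control is not reflected in this sketch.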