Please use this identifier to cite or link to this item:
http://hdl.handle.net/10553/117929
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Rubio, A. | en_US |
dc.contributor.author | Villamizar, M. | en_US |
dc.contributor.author | Ferraz, L. | en_US |
dc.contributor.author | Penate-Sanchez, Adrian | en_US |
dc.contributor.author | Ramisa, A. | en_US |
dc.contributor.author | Simo-Serra, E. | en_US |
dc.contributor.author | Sanfeliu, A. | en_US |
dc.contributor.author | Moreno-Noguer, F. | en_US |
dc.date.accessioned | 2022-09-07T18:01:59Z | - |
dc.date.available | 2022-09-07T18:01:59Z | - |
dc.date.issued | 2015 | en_US |
dc.identifier.isbn | 978-1-4799-6923-4 | en_US |
dc.identifier.issn | 1050-4729 | en_US |
dc.identifier.uri | http://hdl.handle.net/10553/117929 | - |
dc.description.abstract | We propose a robust and efficient method to estimate the pose of a camera with respect to complex 3D textured models of the environment that can potentially contain more than 100,000 points. To tackle this problem we follow a top-down approach where we combine high-level deep network classifiers with low-level geometric approaches to come up with a solution that is fast, robust and accurate. Given an input image, we initially use a pre-trained deep network to compute a rough estimation of the camera pose. This initial estimate constrains the number of 3D model points that can be seen from the camera viewpoint. We then establish 3D-to-2D correspondences between these potentially visible points of the model and the 2D detected image features. Accurate pose estimation is finally obtained from these correspondences using a novel PnP algorithm that rejects outliers without the need for a RANSAC strategy, and which is between 10 and 100 times faster than other methods that use it. Two real experiments dealing with very large and complex 3D models demonstrate the effectiveness of the approach. | en_US |
dc.language | eng | en_US |
dc.relation.ispartof | Proceedings - IEEE International Conference on Robotics and Automation | en_US |
dc.source | IEEE International Conference on Robotics and Automation (ICRA), 15285966, (02 July 2015) | en_US |
dc.subject | 1203 Computer science | en_US |
dc.subject | 1206 Numerical analysis | en_US |
dc.subject.other | Three-dimensional displays | en_US |
dc.subject.other | Solid modeling | en_US |
dc.subject.other | Estimation | en_US |
dc.subject.other | Computational modeling | en_US |
dc.subject.other | Cameras | en_US |
dc.subject.other | Training | en_US |
dc.subject.other | Feature extraction | en_US |
dc.title | Efficient monocular pose estimation for complex 3D models | en_US |
dc.type | info:eu-repo/semantics/conferenceobject | en_US |
dc.type | Conference proceedings | en_US |
dc.relation.conference | 2015 IEEE International Conference on Robotics and Automation (ICRA) | en_US |
dc.identifier.doi | 10.1109/ICRA.2015.7139372 | en_US |
dc.identifier.scopus | 2-s2.0-84938262734 | - |
dc.identifier.isi | WOS:000370974901058 | - |
dc.contributor.orcid | 0000-0003-2876-3301 | - |
dc.identifier.issue | June | - |
dc.relation.volume | 15285966 | en_US |
dc.investigacion | Ingeniería y Arquitectura | en_US |
dc.type2 | Conference proceedings | en_US |
dc.identifier.external | 67238849 | - |
dc.utils.revision | Yes | en_US |
dc.date.coverdate | July 2015 | en_US |
dc.identifier.ulpgc | Yes | en_US |
dc.contributor.buulpgc | BU-INF | en_US |
item.grantfulltext | open | - |
item.fulltext | With full text | - |
crisitem.author.dept | GIR SIANI: Inteligencia Artificial, Redes Neuronales, Aprendizaje Automático e Ingeniería de Datos | - |
crisitem.author.dept | IU Sistemas Inteligentes y Aplicaciones Numéricas | - |
crisitem.author.dept | Departamento de Informática y Sistemas | - |
crisitem.author.orcid | 0000-0003-2876-3301 | - |
crisitem.author.parentorg | IU Sistemas Inteligentes y Aplicaciones Numéricas | - |
crisitem.author.fullName | Peñate Sánchez, Adrián | - |
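The abstract's final geometric step, estimating camera pose from 3D-to-2D point correspondences, can be illustrated with a minimal Direct Linear Transform (DLT) sketch in plain NumPy. This is a generic textbook method, not the paper's outlier-rejecting PnP algorithm; the synthetic point set, the ground-truth camera, and the `dlt_pnp` helper below are all illustrative assumptions.

```python
import numpy as np

def dlt_pnp(pts3d, pts2d):
    """Recover a 3x4 projection matrix from >= 6 3D-to-2D
    correspondences via the Direct Linear Transform (a generic
    textbook method, not the paper's outlier-rejecting PnP)."""
    A = []
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    # The solution is the right singular vector associated with the
    # smallest singular value of the stacked constraint matrix.
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)

# Synthetic check: project random 3D points with a known camera,
# then recover that camera from the correspondences alone.
rng = np.random.default_rng(0)
pts3d = rng.uniform(-1.0, 1.0, (12, 3)) + np.array([0.0, 0.0, 5.0])
P_true = np.hstack([np.eye(3), np.array([[0.1], [-0.2], [0.3]])])
homog = np.hstack([pts3d, np.ones((12, 1))])
proj = (P_true @ homog.T).T
pts2d = proj[:, :2] / proj[:, 2:3]

P_est = dlt_pnp(pts3d, pts2d)
# The estimate is defined only up to scale and sign, but reprojection
# is invariant to that, so compare reprojected image points instead.
reproj = (P_est @ homog.T).T
reproj2d = reproj[:, :2] / reproj[:, 2:3]
print(np.max(np.abs(reproj2d - pts2d)))  # tiny reprojection error
```

Note that a real pipeline along the lines the abstract describes would additionally restrict correspondences to model points visible from the rough pose estimate and reject outlier matches, which is where the paper's RANSAC-free PnP formulation claims its 10-100x speedup.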
Appears in Collections: | Conference proceedings |
Items in accedaCRIS are protected by copyright, with all rights reserved, unless otherwise indicated.