Persistent identifier to cite or link this item: http://hdl.handle.net/10553/117933
Title: LETHA: learning from high quality inputs for 3D pose estimation in low quality images
Authors: Penate-Sanchez, Adrian
Moreno-Noguer, Francesc
Andrade-Cetto, Juan
Fleuret, François
UNESCO classification: 1203 Computer sciences
Keywords: Pose estimation
Low resolution
Boosting
Publication date: 2014
Conference: 2014 2nd International Conference on 3D Vision
Abstract: We introduce LETHA (Learning on Easy data, Test on Hard), a new learning paradigm consisting of building strong priors from high-quality training data and combining them with discriminative machine learning to deal with low-quality test data. Our main contribution is an implementation of that concept for pose estimation. We first automatically build a 3D model of the object of interest from high-definition images and devise from it a pose-indexed feature extraction scheme. We then train a single classifier to process these feature vectors. Given a low-quality test image, we visit many hypothetical poses, extract features consistently, and evaluate the response of the classifier. Since this process uses locations recorded during learning, it no longer requires matching feature points. We use a boosting procedure to train this classifier, common to all poses, which is able to deal with missing features, due in this context to self-occlusion. Our results demonstrate that the method combines the strengths of global image representations, which remain discriminative even for very small images, with the robustness to occlusions of approaches based on local feature point descriptors.
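
The test-time procedure described in the abstract can be read as a scoring loop over pose hypotheses. The sketch below is a minimal illustration of that loop, not the authors' implementation: the pinhole projection, the depth-based visibility test, and the decision-stump classifier that abstains on missing (NaN) features are all simplified, hypothetical stand-ins for the pose-indexed feature extraction and occlusion-robust boosting described in the paper.

    import numpy as np

    def project(points_3d, R, t, f=500.0):
        """Pinhole projection of Nx3 model points under rotation R, translation t."""
        cam = points_3d @ R.T + t          # world -> camera coordinates
        z = cam[:, 2]
        uv = f * cam[:, :2] / z[:, None]   # perspective division
        visible = z > 0                    # crude visibility stand-in for the
                                           # paper's self-occlusion handling
        return uv, visible

    def pose_indexed_features(image, points_3d, R, t):
        """Sample the image at the projected model points; occluded or
        out-of-bounds points yield NaN (missing) features."""
        h, w = image.shape
        uv, visible = project(points_3d, R, t)
        px = np.round(uv + np.array([w / 2, h / 2])).astype(int)
        feats = np.full(len(points_3d), np.nan)
        for i, ((u, v), vis) in enumerate(zip(px, visible)):
            if vis and 0 <= u < w and 0 <= v < h:
                feats[i] = image[v, u]
        return feats

    def boosted_score(feats, stumps):
        """Sum of decision-stump responses; a stump on a missing (NaN)
        feature abstains, one simple way to tolerate missing data."""
        score = 0.0
        for idx, thr, alpha in stumps:     # stumps: (feature index, threshold, weight)
            if not np.isnan(feats[idx]):
                score += alpha * (1.0 if feats[idx] > thr else -1.0)
        return score

    def estimate_pose(image, points_3d, pose_hypotheses, stumps):
        """Visit hypothetical poses, extract features consistently, and
        return the pose with the strongest classifier response."""
        return max(pose_hypotheses,
                   key=lambda Rt: boosted_score(
                       pose_indexed_features(image, points_3d, *Rt), stumps))

Because the sampling locations come from the 3D model built at training time, the loop never matches points between images; it only evaluates one shared classifier at each candidate pose, which is what makes the method applicable to very small test images.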
URI: http://hdl.handle.net/10553/117933
ISBN: 978-1-4799-7000-1
ISSN: 1550-6185
DOI: 10.1109/3dv.2014.18
Source: 2014 2nd International Conference on 3D Vision, 14918659, 08-11 December 2014
Collection: Conference proceedings