Please use this identifier to cite or link to this item: http://hdl.handle.net/10553/117933
Title: LETHA: learning from high quality inputs for 3D pose estimation in low quality images
Authors: Penate-Sanchez, Adrian 
Moreno-Noguer, Francesc
Andrade-Cetto, Juan
Fleuret, François
UNESCO Classification: 1203 Computer science
Keywords: Pose estimation
Low resolution
Boosting
Issue Date: 2014
Conference: 2014 2nd International Conference on 3D Vision
Abstract: We introduce LETHA (Learning on Easy data, Test on Hard), a new learning paradigm that builds strong priors from high-quality training data and combines them with discriminative machine learning to handle low-quality test data. Our main contribution is an implementation of this concept for pose estimation. We first automatically build a 3D model of the object of interest from high-definition images and devise from it a pose-indexed feature extraction scheme. We then train a single classifier to process these feature vectors. Given a low-quality test image, we visit many hypothetical poses, extract features consistently, and evaluate the response of the classifier. Since this process uses locations recorded during learning, it no longer requires matching points. We use a boosting procedure to train this classifier, common to all poses, which is able to deal with missing features, caused in this context by self-occlusion. Our results demonstrate that the method combines the strengths of global image representations, which remain discriminative even for very small images, with the robustness to occlusions of approaches based on local feature point descriptors.
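
To make the test-time procedure in the abstract concrete, here is a minimal Python sketch of the pose search. Every name in it (the pinhole projection helper, the patch-mean feature, the stump-based boosted scorer, and the toy pose hypotheses) is a hypothetical illustration of the idea, not the authors' implementation.

import numpy as np

def project_point(p3d, pose, shape):
    """Hypothetical pinhole projection. pose = (R, t); returns pixel
    coordinates and a crude visibility flag (point in front of the
    camera and inside the image)."""
    R, t = pose
    pc = R @ p3d + t
    if pc[2] <= 0:
        return None, False
    f = 10.0  # assumed focal length, for this toy example only
    u = f * pc[0] / pc[2] + shape[1] / 2
    v = f * pc[1] / pc[2] + shape[0] / 2
    return (u, v), (0 <= u < shape[1] and 0 <= v < shape[0])

def extract_local_feature(image, uv):
    """Hypothetical pose-indexed feature: mean intensity of a small
    patch centered on the projected model point."""
    u, v = int(uv[0]), int(uv[1])
    return float(image[max(v - 2, 0):v + 3, max(u - 2, 0):u + 3].mean())

class BoostedScorer:
    """Stand-in for the single boosted classifier shared by all poses:
    a weighted sum of decision stumps that simply skips missing
    features, mirroring the robustness to self-occlusion."""
    def __init__(self, thresholds, weights):
        self.thresholds, self.weights = thresholds, weights

    def score(self, features):
        s = 0.0
        for f, th, w in zip(features, self.thresholds, self.weights):
            if f is None:  # missing feature from self-occlusion
                continue
            s += w if f > th else -w
        return s

def estimate_pose(image, model_points_3d, classifier, pose_hypotheses):
    """Visit many hypothetical poses, extract features at the model
    locations recorded during learning, and keep the pose with the
    strongest classifier response; no point matching is needed."""
    best_pose, best_score = None, -np.inf
    for pose in pose_hypotheses:
        features = []
        for p3d in model_points_3d:
            uv, visible = project_point(p3d, pose, image.shape)
            features.append(extract_local_feature(image, uv) if visible else None)
        s = classifier.score(features)
        if s > best_score:
            best_pose, best_score = pose, s
    return best_pose

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    image = rng.random((32, 32))  # a tiny, low-quality test image
    model_pts = [np.array([x, y, 0.0]) for x in (-1, 1) for y in (-1, 1)]
    clf = BoostedScorer(thresholds=[0.5] * 4, weights=[1.0] * 4)
    hyps = [(np.eye(3), np.array([0.0, 0.0, z])) for z in (3.0, 5.0, 8.0)]
    print(estimate_pose(image, model_pts, clf, hyps))

In the paper, the features and the boosted classifier are far richer than these stubs; the sketch only shows how a pose-indexed scheme removes the need for test-time point matching.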
URI: http://hdl.handle.net/10553/117933
ISBN: 978-1-4799-7000-1
ISSN: 1550-6185
DOI: 10.1109/3DV.2014.18
Source: 2014 2nd International Conference on 3D Vision, 14918659, 08-11 December 2014
Appears in Collections: Actas de congresos