Please use this identifier to cite or link to this item: http://hdl.handle.net/10553/130230
DC Field | Value | Language
dc.contributor.author | Tinchev, Georgi | en_US
dc.contributor.author | Penate-Sanchez, Adrian | en_US
dc.contributor.author | Fallon, Maurice | en_US
dc.date.accessioned | 2024-05-08T18:33:09Z | -
dc.date.available | 2024-05-08T18:33:09Z | -
dc.date.issued | 2019 | en_US
dc.identifier.issn | 2377-3766 | en_US
dc.identifier.uri | http://hdl.handle.net/10553/130230 | -
dc.description.abstract | Localization in challenging natural environments such as forests or woodlands is an important capability for many applications, from guiding a robot navigating along a forest trail to monitoring vegetation growth with handheld sensors. In this work we explore laser-based localization in both urban and natural environments with an approach suitable for online applications. We propose a deep learning approach capable of learning meaningful descriptors directly from 3D point clouds by comparing triplets (anchor, positive, and negative examples). The approach learns a feature-space representation for a set of segmented point clouds that are matched between current and previous observations. Our learning method is tailored towards loop closure detection, resulting in a small model that can be deployed using only a CPU. The proposed learning method would allow the full pipeline to run on robots with limited computational payload such as drones, quadrupeds, or UGVs. | en_US
dc.language | eng | en_US
dc.publisher | Institute of Electrical and Electronics Engineers (IEEE) | en_US
dc.relation.ispartof | IEEE Robotics and Automation Letters | en_US
dc.source | IEEE Robotics and Automation Letters, [ISSN: 2377-3766], vol. 4 (2), (2019) | en_US
dc.subject | 1203 Ciencia de los ordenadores | en_US
dc.subject.other | Localization | en_US
dc.subject.other | Deep learning in robotics and automation | en_US
dc.subject.other | Visual learning | en_US
dc.subject.other | SLAM | en_US
dc.subject.other | Field Robots | en_US
dc.title | Learning to see the wood for the trees: deep laser localization in urban and natural environments on a CPU | en_US
dc.type | info:eu-repo/semantics/article | en_US
dc.type | Article | en_US
dc.identifier.doi | 10.1109/LRA.2019.2895264 | en_US
dc.identifier.scopus | 2-s2.0-85063310630 | -
dc.contributor.orcid | 0000-0002-9910-6598 | -
dc.contributor.orcid | 0000-0003-2876-3301 | -
dc.contributor.orcid | 0000-0003-2940-0879 | -
dc.identifier.issue | 2 | -
dc.relation.volume | 4 | en_US
dc.investigacion | Ingeniería y Arquitectura | en_US
dc.type2 | Artículo | en_US
dc.description.numberofpages | 8 | en_US
dc.utils.revision | | en_US
dc.date.coverdate | April 2019 | en_US
dc.identifier.ulpgc | | en_US
dc.contributor.buulpgc | BU-INF | en_US
dc.description.sjr | 1,555 |
dc.description.jcr | 3,608 |
dc.description.sjrq | Q1 |
dc.description.jcrq | Q1 |
dc.description.esci | ESCI |
item.fulltext | Con texto completo | -
item.grantfulltext | open | -
crisitem.author.dept | GIR SIANI: Inteligencia Artificial, Redes Neuronales, Aprendizaje Automático e Ingeniería de Datos | -
crisitem.author.dept | IU Sistemas Inteligentes y Aplicaciones Numéricas | -
crisitem.author.dept | Departamento de Informática y Sistemas | -
crisitem.author.orcid | 0000-0003-2876-3301 | -
crisitem.author.parentorg | IU Sistemas Inteligentes y Aplicaciones Numéricas | -
crisitem.author.fullName | Peñate Sánchez, Adrián | -
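
The abstract above describes learning compact descriptors for segmented point clouds with a triplet objective (anchor, positive, negative), so that segments from the current scan can be matched against previously seen ones for loop closure detection. The following is a minimal sketch of that idea in PyTorch; the encoder architecture, descriptor size, margin, and all names are illustrative assumptions, not the authors' actual network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SegmentEncoder(nn.Module):
    """Maps a point cloud segment of shape (N, 3) to a compact descriptor.

    Hypothetical stand-in for the paper's model: a shared per-point MLP
    followed by an order-invariant max-pool, similar in spirit to
    PointNet-style encoders.
    """
    def __init__(self, descriptor_dim=16):
        super().__init__()
        self.point_mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
        )
        self.head = nn.Linear(128, descriptor_dim)

    def forward(self, points):              # points: (B, N, 3)
        feats = self.point_mlp(points)      # per-point features: (B, N, 128)
        pooled = feats.max(dim=1).values    # symmetric pooling over the points
        return F.normalize(self.head(pooled), dim=-1)  # unit-norm descriptor

encoder = SegmentEncoder()
loss_fn = nn.TripletMarginLoss(margin=0.5)  # margin is an illustrative value

# Anchor and positive represent the same physical segment observed in two
# different scans; the negative is a different segment. Random tensors are
# used here purely as stand-ins for real segmented point clouds.
anchor   = torch.randn(8, 256, 3)
positive = anchor + 0.01 * torch.randn_like(anchor)
negative = torch.randn(8, 256, 3)

loss = loss_fn(encoder(anchor), encoder(positive), encoder(negative))
loss.backward()
```

At query time, descriptors computed for segments in the current observation would be compared against stored descriptors (e.g. by Euclidean distance) to propose loop closure candidates, which is what keeps the online pipeline light enough for a CPU-only deployment.
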
Appears in Collections: Artículos
Adobe PDF (1,24 MB)
Items in accedaCRIS are protected by copyright, with all rights reserved, unless otherwise indicated.