Persistent identifier to cite or link this item: http://hdl.handle.net/10553/117928
Title: Learning depth-aware deep representations for robotic perception
Authors: Porzi, Lorenzo
Rota Buló, Samuel
Penate-Sanchez, Adrian
Ricci, Elisa
Moreno-Noguer, Francesc
Keywords: RGB-D perception
Visual learning
Publication date: 2017
Journal: IEEE Robotics and Automation Letters
Abstract: Exploiting RGB-D data by means of convolutional neural networks (CNNs) is at the core of a number of robotics applications, including object detection, scene semantic segmentation, and grasping. Most existing approaches, however, exploit RGB-D data by simply considering depth as an additional input channel for the network. In this paper we show that the performance of deep architectures can be boosted by introducing DaConv, a novel, general-purpose CNN block which exploits depth to learn scale-aware feature representations. We demonstrate the benefits of DaConv on a variety of robotics-oriented tasks, involving affordance detection, object coordinate regression, and contour detection in RGB-D images. In each of these experiments we show the potential of the proposed block and how it can be readily integrated into existing CNN architectures.
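The abstract describes conditioning a CNN's feature extraction on depth to obtain scale-aware representations. As a loose illustration only (not the authors' DaConv block, whose exact formulation is given in the paper), the following hypothetical NumPy sketch shows the general idea: making a local operation's receptive field depend on per-pixel depth, so that nearer (larger-appearing) surfaces are aggregated over larger windows.

```python
import numpy as np

def depth_aware_pool(features, depth, bins=(0.5, 1.5)):
    """Toy depth-conditioned pooling (hypothetical; NOT the paper's DaConv).
    Each pixel is average-pooled with a window whose radius depends on its
    depth: near pixels get radius 2, mid-range radius 1, far radius 0."""
    H, W = features.shape
    # Quantize depth into three bins and map each bin to a window radius.
    radii = np.select([depth < bins[0], depth < bins[1]], [2, 1], default=0)
    out = np.empty((H, W), dtype=float)
    for y in range(H):
        for x in range(W):
            r = radii[y, x]
            y0, y1 = max(0, y - r), min(H, y + r + 1)
            x0, x1 = max(0, x - r), min(W, x + r + 1)
            out[y, x] = features[y0:y1, x0:x1].mean()
    return out
```

In a learned variant, the depth-to-scale mapping would be differentiable and the pooling replaced by convolutions, but the sketch captures the core intuition that apparent object scale is a function of depth.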
URI: http://hdl.handle.net/10553/117928
ISSN: 2377-3766
DOI: 10.1109/LRA.2016.2637444
Source: IEEE Robotics and Automation Letters, v. 2 (2), p. 468 - 475 (2017)
Collection: Articles

SCOPUS™ citations: 24 (updated 24-Nov-2024)
Web of Science™ citations: 18 (updated 24-Nov-2024)
Views: 47 (updated 15-Jun-2024)
Downloads: 29 (updated 15-Jun-2024)


Items in ULPGC accedaCRIS are protected by copyright, with all rights reserved, unless otherwise indicated.