Please use this identifier to cite or link to this item:
http://hdl.handle.net/10553/117928
Title: Learning depth-aware deep representations for robotic perception
Authors: Porzi, Lorenzo; Rota Buló, Samuel; Penate-Sanchez, Adrian; Ricci, Elisa; Moreno-Noguer, Francesc
Keywords: RGB-D perception; Visual learning
Issue Date: 2017
Journal: IEEE Robotics and Automation Letters
Abstract: Exploiting RGB-D data by means of convolutional neural networks (CNNs) is at the core of a number of robotics applications, including object detection, scene semantic segmentation, and grasping. Most existing approaches, however, exploit RGB-D data by simply considering depth as an additional input channel for the network. In this paper we show that the performance of deep architectures can be boosted by introducing DaConv, a novel, general-purpose CNN block which exploits depth to learn scale-aware feature representations. We demonstrate the benefits of DaConv on a variety of robotics-oriented tasks, involving affordance detection, object coordinate regression, and contour detection in RGB-D images. In each of these experiments we show the potential of the proposed block and how it can be readily integrated into existing CNN architectures.
URI: http://hdl.handle.net/10553/117928
ISSN: 2377-3766
DOI: 10.1109/LRA.2016.2637444
Source: IEEE Robotics and Automation Letters, v. 2 (2), p. 468-475 (2017)
Appears in Collections: Artículos
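To make the abstract's distinction concrete: instead of stacking depth as a fourth input channel, a depth-aware block can use the depth map to select among convolution branches tuned to different scales. The sketch below is only an illustration of that general idea under assumed design choices (dilated branches as scale proxies, a 1x1 gating convolution); it is not the paper's actual DaConv implementation, and all class and parameter names are hypothetical.

import torch
import torch.nn as nn

class DepthAwareBlock(nn.Module):
    """Toy depth-aware block (illustration only, NOT the paper's DaConv).

    Runs parallel conv branches with different dilation rates as proxies
    for scale, then blends them per pixel with weights predicted from the
    depth map, so receptive field varies with scene depth.
    """
    def __init__(self, in_ch, out_ch, n_scales=3):
        super().__init__()
        # One 3x3 branch per scale; dilation d keeps spatial size with padding d.
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=d, dilation=d)
            for d in range(1, n_scales + 1)
        ])
        # 1x1 conv maps the single-channel depth map to per-scale logits.
        self.gate = nn.Conv2d(1, n_scales, kernel_size=1)

    def forward(self, rgb_feats, depth):
        # Per-pixel softmax over scales, driven by depth: (B, S, H, W)
        w = torch.softmax(self.gate(depth), dim=1)
        # Stack branch outputs along a new scale axis: (B, S, C, H, W)
        outs = torch.stack([b(rgb_feats) for b in self.branches], dim=1)
        # Weighted sum over scales back to (B, C, H, W)
        return (w.unsqueeze(2) * outs).sum(dim=1)

# Example usage with hypothetical shapes:
#   feats = torch.randn(2, 64, 32, 32)   # RGB feature maps
#   depth = torch.randn(2, 1, 32, 32)    # aligned depth map
#   out = DepthAwareBlock(64, 64)(feats, depth)  # -> (2, 64, 32, 32)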
Scopus™ Citations: 24 (checked on Nov 24, 2024)
Web of Science™ Citations: 18 (checked on Nov 24, 2024)
Items in accedaCRIS are protected by copyright, with all rights reserved, unless otherwise indicated.