Title: Transfer Learning from Simulated to Real Scenes for Monocular 3D Object Detection
Authors: Mohamed, Sondos
Zimmer, Walter
Greer, Ross
Ghita, Ahmed Alaaeldin
Castrillón-Santana, Modesto 
Trivedi, Mohan
Knoll, Alois
Carta, Salvatore Mario
Marras, Mirko
UNESCO Classification: 33 Technological Sciences
Keywords: Intelligent Transportation Systems
Intelligent Vehicles
Monocular 3D Object Detection
Synthetic Data
Transfer Learning
Issue Date: 2025
Journal: Lecture Notes in Computer Science 
Conference: Workshops held in conjunction with the 18th European Conference on Computer Vision (ECCV 2024)
Abstract: Accurately detecting 3D objects from monocular images in dynamic roadside scenarios remains a challenging problem due to varying camera perspectives and unpredictable scene conditions. This paper introduces a two-stage training strategy to address these challenges. Our approach initially trains a model on the large-scale synthetic dataset RoadSense3D, which offers a diverse range of scenarios for robust feature learning. Subsequently, we fine-tune the model on a combination of real-world datasets to enhance its adaptability to practical conditions. Experimental results with the Cube R-CNN model on challenging public benchmarks show a remarkable improvement in detection performance when transfer learning is applied, with mean average precision rising from 0.26 to 12.76 on the TUM Traffic A9 Highway dataset and from 2.09 to 6.60 on the DAIR-V2X-I dataset. Code, data, and qualitative video results are available at https://roadsense3d.github.io.
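
The abstract describes a two-stage recipe: pre-train on synthetic data, then fine-tune on real roadside data. The following is a minimal illustrative sketch of that recipe in plain PyTorch; the toy regression model, random stand-in datasets, loss, and hyperparameters are assumptions for illustration only and do not reproduce the actual Cube R-CNN pipeline or the RoadSense3D, TUM Traffic A9, and DAIR-V2X-I loaders used in the paper.

    # Sketch of synthetic-to-real transfer learning (assumed setup, not the paper's code).
    import torch
    from torch import nn
    from torch.utils.data import DataLoader, TensorDataset

    def make_toy_dataset(num_samples: int) -> TensorDataset:
        """Stand-in for a real 3D-detection dataset: images -> 3D box parameters."""
        images = torch.randn(num_samples, 3, 64, 64)
        boxes_3d = torch.randn(num_samples, 7)  # x, y, z, w, h, l, yaw
        return TensorDataset(images, boxes_3d)

    def train(model: nn.Module, loader: DataLoader, epochs: int, lr: float) -> None:
        """One training stage: a standard supervised loop with a smooth L1 box loss."""
        optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
        criterion = nn.SmoothL1Loss()
        model.train()
        for _ in range(epochs):
            for images, targets in loader:
                optimizer.zero_grad()
                loss = criterion(model(images), targets)
                loss.backward()
                optimizer.step()

    # Toy monocular "detector": a small CNN regressing 3D box parameters.
    detector = nn.Sequential(
        nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(32, 7),
    )

    # Stage 1: pre-train on large-scale synthetic data (RoadSense3D in the paper).
    synthetic_loader = DataLoader(make_toy_dataset(512), batch_size=32, shuffle=True)
    train(detector, synthetic_loader, epochs=5, lr=1e-3)

    # Stage 2: fine-tune the same weights on real-world roadside data,
    # typically with a lower learning rate to preserve the pre-trained features.
    real_loader = DataLoader(make_toy_dataset(128), batch_size=16, shuffle=True)
    train(detector, real_loader, epochs=10, lr=1e-4)

The key design point is that stage 2 continues from the stage 1 weights rather than re-initializing the model, so features learned from diverse synthetic scenes carry over to the smaller real-world datasets.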
URI: https://accedacris.ulpgc.es/handle/10553/139745
ISBN: 9783031918124
ISSN: 0302-9743
DOI: 10.1007/978-3-031-91813-1_20
Source: Lecture Notes in Computer Science [ISSN 0302-9743], v. 15630 LNCS, p. 309-325 (January 2025)
Appears in Collections: Actas de congresos
Items in accedaCRIS are protected by copyright, with all rights reserved, unless otherwise indicated.