Persistent identifier for citing or linking this item: http://hdl.handle.net/10553/128884
DC Field | Value | Language
dc.contributor.author | Vasquez Salazar, Ruben Dario | en_US
dc.contributor.author | Cardona Mesa, Ahmed Alejandro | en_US
dc.contributor.author | Gómez Déniz, Luis | en_US
dc.contributor.author | Travieso González, Carlos Manuel | en_US
dc.date.accessioned | 2024-02-12T09:19:01Z | -
dc.date.available | 2024-02-12T09:19:01Z | -
dc.date.issued | 2024 | en_US
dc.identifier.issn | 1545-598X | en_US
dc.identifier.other | Scopus | -
dc.identifier.uri | http://hdl.handle.net/10553/128884 | -
dc.description.abstract | Deep learning methods require immense amounts of labeled data to provide reasonable results. In computer vision applications, and more specifically in despeckling SAR (Synthetic Aperture Radar) images, no ground truth is available because of the speckle content. To test the performance of despeckling filters, the common protocol is to synthetically corrupt optical images with a suitable speckle model, compute well-known metrics after filtering, and then test the filters on actual SAR data. However, even the most elaborate speckle models are far from accounting for the complex mechanisms involved in SAR imaging. In this paper, a methodology to design a realistic dataset is proposed. Actual SAR images of the same scene, acquired with the same sensor on different dates, are properly co-registered and averaged to obtain a ground-truth-like reference image that allows the performance of a despeckling method to be evaluated objectively. To show the benefits of the proposed methodology, a deep learning approach is used to filter the data with the designed dataset, which will be called the “SAR model”. It is then compared with the standard protocol based on synthetically corrupted optical images, which will be called the “Synthetic model”. One last validation is performed by filtering the same images with FANS, a well-known despeckling filter, and comparing the results with those obtained with the autoencoder. The validation on actual SAR data not included in the training phase confirms the proposed methodology. From the results shown, it is recommended to test filters on the proposed, more realistic dataset. | en_US
dc.language | spa | en_US
dc.relation.ispartof | IEEE Geoscience and Remote Sensing Letters | en_US
dc.source | IEEE Geoscience and Remote Sensing Letters [ISSN 1545-598X], (Enero 2024) | en_US
dc.subject | 220920 Radiometría | en_US
dc.subject.other | Deep Learning (Dl) | en_US
dc.subject.other | Multitemporal Fusion | en_US
dc.subject.other | Noise Measurement | en_US
dc.subject.other | Optical Filters | en_US
dc.subject.other | Protocols | en_US
dc.subject.other | Radar Polarimetry | en_US
dc.subject.other | Sar Data | en_US
dc.subject.other | Speckle | en_US
dc.subject.other | Speckle Noise | en_US
dc.subject.other | Supervised Learning | en_US
dc.subject.other | Synthetic Aperture Radar | en_US
dc.subject.other | Training | en_US
dc.title | A new methodology for assessing SAR despeckling filters | en_US
dc.type | info:eu-repo/semantics/Article | en_US
dc.type | Article | en_US
dc.identifier.doi | 10.1109/LGRS.2024.3357211 | en_US
dc.identifier.scopus | 85183665645 | -
dc.contributor.orcid | NO DATA | -
dc.contributor.orcid | NO DATA | -
dc.contributor.orcid | NO DATA | -
dc.contributor.orcid | NO DATA | -
dc.contributor.authorscopusid | 58544220200 | -
dc.contributor.authorscopusid | 58544725400 | -
dc.contributor.authorscopusid | 56789548300 | -
dc.contributor.authorscopusid | 57219115631 | -
dc.identifier.eissn | 1558-0571 | -
dc.investigacion | Ingeniería y Arquitectura | en_US
dc.type2 | Artículo | en_US
dc.description.numberofpages | 5 | en_US
dc.utils.revision |  | en_US
dc.date.coverdate | Enero 2024 | en_US
dc.identifier.ulpgc |  | en_US
dc.contributor.buulpgc | BU-TEL | en_US
dc.description.sjr | 1,248
dc.description.jcr | 4,8
dc.description.sjrq | Q1
dc.description.jcrq | Q1
dc.description.scie | SCIE
dc.description.miaricds | 10,7
item.grantfulltext | open | -
item.fulltext | Con texto completo | -
crisitem.author.dept | GIR IUCES: Centro de Tecnologías de la Imagen | -
crisitem.author.dept | IU de Cibernética, Empresa y Sociedad (IUCES) | -
crisitem.author.dept | GIR IUCES: Centro de Tecnologías de la Imagen | -
crisitem.author.dept | IU de Cibernética, Empresa y Sociedad (IUCES) | -
crisitem.author.dept | GIR IUCES: Centro de Tecnologías de la Imagen | -
crisitem.author.dept | IU de Cibernética, Empresa y Sociedad (IUCES) | -
crisitem.author.dept | Departamento de Ingeniería Electrónica y Automática | -
crisitem.author.dept | GIR IDeTIC: División de Procesado Digital de Señales | -
crisitem.author.dept | IU para el Desarrollo Tecnológico y la Innovación | -
crisitem.author.dept | Departamento de Señales y Comunicaciones | -
crisitem.author.orcid | 0000-0003-0667-2302 | -
crisitem.author.orcid | 0000-0002-4621-2768 | -
crisitem.author.parentorg | IU de Cibernética, Empresa y Sociedad (IUCES) | -
crisitem.author.parentorg | IU de Cibernética, Empresa y Sociedad (IUCES) | -
crisitem.author.parentorg | IU de Cibernética, Empresa y Sociedad (IUCES) | -
crisitem.author.parentorg | IU para el Desarrollo Tecnológico y la Innovación | -
crisitem.author.fullName | Vasquez Salazar, Ruben Dario | -
crisitem.author.fullName | Cardona Mesa, Ahmed Alejandro | -
crisitem.author.fullName | Gómez Déniz, Luis | -
crisitem.author.fullName | Travieso González, Carlos Manuel | -
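
The core idea summarized in the abstract above (co-register multitemporal SAR acquisitions of one scene, average them to obtain a ground-truth-like reference, then pair each single-date image with that reference for supervised despeckling) can be sketched in a few lines. The snippet below is only an illustrative sketch, not the authors' code: the array shapes, the intensity-domain averaging, and the toy speckle simulation are assumptions made for the example.

import numpy as np

def build_reference(stack: np.ndarray) -> np.ndarray:
    """Temporal average of a co-registered stack (T, H, W) of SAR intensity
    images; averaging T independent looks reduces speckle variance roughly by 1/T."""
    if stack.ndim != 3:
        raise ValueError("expected a (T, H, W) stack of co-registered images")
    return stack.mean(axis=0)

def make_training_pairs(stack: np.ndarray):
    """Yield (noisy, reference) pairs: each single-date acquisition against the
    multitemporal average, used as the supervised target for a despeckling network."""
    reference = build_reference(stack)
    for noisy in stack:
        yield noisy, reference

if __name__ == "__main__":
    # Toy example: 16 simulated single-look intensity images of one scene
    # (multiplicative exponential speckle over a constant backscatter of 1.0).
    rng = np.random.default_rng(0)
    stack = rng.exponential(scale=1.0, size=(16, 64, 64)).astype(np.float32)
    noisy, reference = next(make_training_pairs(stack))
    print("noisy std:", noisy.std(), "reference std:", reference.std())

Because the averaged image has strongly reduced speckle, it can stand in for the clean ground truth that reference-based quality metrics normally require, which is what makes the proposed dataset usable for training and for objectively scoring despeckling filters.
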
Collection: Artículos
Adobe PDF (7,45 MB)

SCOPUS™ citations: 3 (updated 15-Dec-2024)
Web of Science™ citations: 2 (updated 15-Dec-2024)
Visits: 62 (updated 06-Jul-2024)
Downloads: 115 (updated 06-Jul-2024)

Items in ULPGC accedaCRIS are protected by copyright, with all rights reserved, unless otherwise indicated.