Please use this identifier to cite or link to this item: http://hdl.handle.net/10553/113964
DC Field | Value | Language
dc.contributor.author | Salgueiro, Luis | en_US
dc.contributor.author | Marcello Ruiz, Francisco Javier | en_US
dc.contributor.author | Vilaplana, Verónica | en_US
dc.date.accessioned | 2022-03-08T08:24:20Z | -
dc.date.available | 2022-03-08T08:24:20Z | -
dc.date.issued | 2021 | en_US
dc.identifier.issn | 2072-4292 | en_US
dc.identifier.uri | http://hdl.handle.net/10553/113964 | -
dc.description.abstract | Sentinel-2 satellites have become one of the main resources for Earth observation images because they are free of charge, offer great spatial coverage, and have a high temporal revisit. Sentinel-2 senses the same location at different spatial resolutions, generating a multi-spectral image with 13 bands at 10, 20, and 60 m/pixel. In this work, we propose a single-image super-resolution model based on convolutional neural networks that enhances the low-resolution bands (20 m and 60 m) to the maximal sensed resolution (10 m) simultaneously, whereas other approaches provide an independent model for each group of LR bands. Our proposed model, named Sen2-RDSR, is made up of Residual in Residual blocks that produce two final outputs at maximal resolution, one for the 20 m/pixel bands and the other for the 60 m/pixel bands. The training is done in two stages, first focusing on the 20 m bands and then on the 60 m bands. Experimental results using six quality metrics (RMSE, SRE, SAM, PSNR, SSIM, ERGAS) show that our model has superior performance compared to other state-of-the-art approaches, and it is very effective and suitable as a preliminary step for land and coastal applications, such as studies involving pixel-based classification for Land-Use-Land-Cover or the generation of vegetation indices. | en_US
dc.language | eng | en_US
dc.relation | Procesado Avanzado de Datos de Teledetección Para la Monitorización y Gestión Sostenible de Recursos Marinos y Terrestres en Ecosistemas Vulnerables. | en_US
dc.relation.ispartof | Remote Sensing | en_US
dc.source | Remote Sensing [ISSN 2072-4292], v. 13(24), 5007, (December 2021) | en_US
dc.subject | 250407 Geodesia por satélites | en_US
dc.subject | 332401 Satélites artificiales | en_US
dc.subject.other | Sentinel-2 | en_US
dc.subject.other | Super-resolution | en_US
dc.subject.other | Convolutional neural network | en_US
dc.subject.other | Deep learning | en_US
dc.title | Single-Image Super-Resolution of Sentinel-2 Low Resolution Bands with Residual Dense Convolutional Neural Networks | en_US
dc.type | info:eu-repo/semantics/article | en_US
dc.type | Article | en_US
dc.identifier.doi | 10.3390/rs13245007 | en_US
dc.identifier.scopus | 2-s2.0-85121470362 | -
dc.identifier.isi | WOS:000737252600001 | -
dc.contributor.orcid | #NODATA# | -
dc.contributor.orcid | #NODATA# | -
dc.contributor.orcid | #NODATA# | -
dc.identifier.issue | 24 | -
dc.relation.volume | 13(24) | en_US
dc.investigacion | Ingeniería y Arquitectura | en_US
dc.type2 | Artículo | en_US
dc.description.notas | This article belongs to the Special Issue Advanced Super-Resolution Methods in Remote Sensing | en_US
dc.description.numberofpages | 20 | en_US
dc.utils.revision | | en_US
dc.identifier.ulpgc | | en_US
dc.contributor.buulpgc | BU-ING | en_US
dc.description.sjr | 1.283 |
dc.description.jcr | 5.349 |
dc.description.sjrq | Q1 |
dc.description.jcrq | Q1 |
dc.description.scie | SCIE |
dc.description.miaricds | 10.6 |
item.grantfulltext | open | -
item.fulltext | Con texto completo | -
crisitem.project.principalinvestigator | Marcello Ruiz, Francisco Javier | -
crisitem.author.dept | GIR IOCAG: Procesado de Imágenes y Teledetección | -
crisitem.author.dept | IU de Oceanografía y Cambio Global | -
crisitem.author.dept | Departamento de Señales y Comunicaciones | -
crisitem.author.orcid | 0000-0002-9646-1017 | -
crisitem.author.parentorg | IU de Oceanografía y Cambio Global | -
crisitem.author.fullName | Marcello Ruiz, Francisco Javier | -
Appears in Collections: Artículos
Adobe PDF (15.7 MB)

SCOPUS™ Citations: 11 (checked on Nov 17, 2024)
Web of Science™ Citations: 9 (checked on Nov 17, 2024)
Page view(s): 113 (checked on Oct 12, 2024)
Download(s): 200 (checked on Oct 12, 2024)


Items in accedaCRIS are protected by copyright, with all rights reserved, unless otherwise indicated.