Persistent identifier for citing or linking to this item:
http://hdl.handle.net/10553/121364
Title: SEG-ESRGAN: A Multi-Task Network for Super-Resolution and Semantic Segmentation of Remote Sensing Images
Authors: Salgueiro, Luis; Marcello Ruiz, Francisco Javier; Vilaplana, Verónica
Keywords: Multi-task network; Super-resolution; Semantic segmentation; Sentinel-2; WorldView-2
Publication date: 2022
Project: Procesado Avanzado de Datos de Teledetección para la Monitorización y Gestión Sostenible de Recursos Marinos y Terrestres en Ecosistemas Vulnerables
Journal: Remote Sensing
Abstract: The production of highly accurate land cover maps is one of the primary challenges in remote sensing and depends on the spatial resolution of the input images. Sometimes, high-resolution imagery is not available or is too expensive to cover large areas or to perform multitemporal analysis. In this context, we propose a multi-task network that takes advantage of the freely available Sentinel-2 imagery to produce a super-resolved image, with a scaling factor of 5, and the corresponding high-resolution land cover map. Our proposal, named SEG-ESRGAN, consists of two branches: the super-resolution branch, which produces Sentinel-2 multispectral images at 2 m resolution, and the semantic segmentation branch, an encoder–decoder architecture that generates the enhanced land cover map. Several skip connections from the super-resolution branch are concatenated with features from the different stages of the encoder of the segmentation branch, promoting the flow of meaningful information to boost accuracy in the segmentation task. Our model is trained with a multi-loss approach on a novel dataset, developed from Sentinel-2 and WorldView-2 image pairs, to train and test the super-resolution stage. In addition, we generated a dataset with ground-truth labels for the segmentation task. To assess the super-resolution improvement, the PSNR, SSIM, ERGAS, and SAM metrics were considered, while to measure the classification performance, we used the IoU, the confusion matrix, and the F1-score. Experimental results demonstrate that the SEG-ESRGAN model outperforms different full segmentation and dual network models (U-Net, DeepLabV3+, HRNet, and Dual_DeepLab), allowing the generation of high-resolution land cover maps in challenging scenarios using Sentinel-2 10 m bands.
URI: http://hdl.handle.net/10553/121364
ISSN: 2072-4292
DOI: 10.3390/rs14225862
Source: Remote Sensing [ISSN 2072-4292], v. 14 (22), 5862, (November 2022)
Collection: Articles
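The abstract describes a dual-branch design in which intermediate features from the super-resolution branch are concatenated into the encoder of the segmentation branch, and the whole network is trained with a combined (multi-loss) objective. Below is a minimal PyTorch sketch of that idea only; the layer counts, channel widths, band/class numbers, loss weights, and module names are illustrative assumptions and do not reproduce the actual SEG-ESRGAN architecture.

```python
import torch
import torch.nn as nn

class DualBranchSRSeg(nn.Module):
    """Toy dual-branch network: SR output plus HR segmentation logits (illustrative only)."""
    def __init__(self, in_bands=4, n_classes=8, scale=5):
        super().__init__()
        # Super-resolution branch: shallow feature extractor + x5 upsampling head.
        self.sr_feat = nn.Sequential(
            nn.Conv2d(in_bands, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.sr_head = nn.Sequential(
            nn.Upsample(scale_factor=scale, mode="bilinear", align_corners=False),
            nn.Conv2d(64, in_bands, 3, padding=1),
        )
        # Segmentation branch: an encoder stage fed with SR-branch features (skip connection),
        # followed by a decoder that produces high-resolution land cover logits.
        self.enc1 = nn.Sequential(nn.Conv2d(in_bands + 64, 64, 3, padding=1), nn.ReLU(inplace=True))
        self.enc2 = nn.Sequential(nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True))
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 2, stride=2), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=scale, mode="bilinear", align_corners=False),
            nn.Conv2d(64, n_classes, 1),
        )

    def forward(self, x_lr):
        f = self.sr_feat(x_lr)                   # low-resolution shared features
        sr = self.sr_head(f)                     # super-resolved bands (x5 spatially)
        e1 = self.enc1(torch.cat([x_lr, f], 1))  # skip connection from the SR branch
        seg = self.dec(self.enc2(e1))            # high-resolution class logits
        return sr, seg

# Multi-loss training step with hypothetical targets and unit loss weights:
# an SR reconstruction term plus a segmentation term.
model = DualBranchSRSeg()
x = torch.randn(2, 4, 64, 64)                    # e.g. patches of Sentinel-2 10 m bands
y_hr = torch.randn(2, 4, 320, 320)               # HR reference image (WorldView-2-like)
y_seg = torch.randint(0, 8, (2, 320, 320))       # HR land cover labels
sr, seg = model(x)
loss = nn.functional.l1_loss(sr, y_hr) + nn.functional.cross_entropy(seg, y_seg)
loss.backward()
```

Concatenating SR-branch features into the encoder is the mechanism the abstract credits for the segmentation gains: the segmentation head can exploit the spatial detail recovered by the super-resolution branch rather than working from the 10 m input alone.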
Items in ULPGC accedaCRIS are protected by copyright, with all rights reserved, unless otherwise indicated.