Persistent identifier for citing or linking this item:
http://hdl.handle.net/10553/42120
Title: | A computationally efficient algorithm for fusing multispectral and hyperspectral images
Authors: | Guerra, Raul; Lopez, Sebastian; Sarmiento, Roberto
UNESCO classification: | 221007 Electron spectroscopy; 220990 Digital image processing
Keywords: | Data fusion; Gram-Schmidt orthogonalization; Hyperspectral; Multispectral; Orthogonal projections, et al.
Publication date: | 2016
Journal: | IEEE Transactions on Geoscience and Remote Sensing
Abstract: | Remote sensing systems equipped with multispectral and hyperspectral sensors are able to capture images of the surface of the Earth at different wavelengths. In these systems, hyperspectral sensors typically provide images with a high spectral resolution but a reduced spatial resolution, while multispectral sensors produce images with a rich spatial resolution but a poor spectral resolution. For this reason, different fusion algorithms have been proposed in recent years to obtain remotely sensed images with enriched spatial and spectral resolutions by wisely combining the data acquired for the same scene by multispectral and hyperspectral sensors. However, the algorithms proposed so far that are able to obtain fused images with good spatial and spectral quality require a formidable amount of computationally complex operations that cannot be executed in parallel, which clearly prevents the use of these algorithms in applications under real-time constraints, in which high-performance parallel computing systems are normally required to accelerate the overall process. On the other hand, there are other state-of-the-art algorithms that are capable of fusing these images with a lower computational effort, but at the cost of decreasing the quality of the resultant fused image. In this paper, a new algorithm named computationally efficient algorithm for fusing multispectral and hyperspectral images (CoEf-MHI) is proposed to obtain a high-quality image from hyperspectral and multispectral images of the same scene with a low computational effort. The proposed CoEf-MHI algorithm is based on incorporating the spatial details of the multispectral image into the hyperspectral image without introducing spectral distortions. To achieve this goal, the CoEf-MHI algorithm first spatially upsamples, by means of a bilinear interpolation, the input hyperspectral image to the spatial resolution of the input multispectral image, and then independently refines each pixel of the resulting image by linearly combining the multispectral and hyperspectral pixels in its neighborhood. The simulations performed in this work with different images demonstrate that our proposal is much more efficient than state-of-the-art approaches, where efficiency is understood as the ratio between the quality of the fused image and the computational effort required to obtain it. (A minimal illustrative sketch of these two stages is given after this record.)
URI: | http://hdl.handle.net/10553/42120
ISSN: | 0196-2892
DOI: | 10.1109/TGRS.2016.2570433
Source: | IEEE Transactions on Geoscience and Remote Sensing [ISSN 0196-2892], v. 54 (7485828), p. 5712-5728
Collection: | Articles
Items in ULPGC accedaCRIS are protected by copyright, with all rights reserved, unless otherwise indicated.
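The abstract above describes two stages: bilinear spatial upsampling of the hyperspectral cube to the multispectral grid, followed by a per-pixel refinement that linearly combines neighboring multispectral and hyperspectral pixels. The sketch below is only a minimal illustration of that pipeline under stated assumptions: the upsampling step follows the abstract, while the neighborhood weighting used in the refinement step is a placeholder, not the published CoEf-MHI formulation (see the DOI above). The names upsample_hsi, refine_pixels, and all array shapes are hypothetical.

```python
# Minimal sketch of the two stages described in the abstract. The refinement
# weights here are an illustrative assumption, NOT the published CoEf-MHI
# formulation (see DOI 10.1109/TGRS.2016.2570433 for the actual method).
import numpy as np
from scipy.ndimage import zoom, uniform_filter

def upsample_hsi(hsi, scale):
    """Stage 1: bilinear (order-1) spatial upsampling of a (bands, H, W) cube."""
    return zoom(hsi, (1, scale, scale), order=1)

def refine_pixels(hsi_up, msi_band, window=3, eps=1e-6):
    """Stage 2 (illustrative only): rescale each upsampled hyperspectral pixel
    using the spatial detail of the multispectral band in a small neighborhood.
    The real algorithm combines multispectral and hyperspectral neighbors with
    its own per-pixel linear weights."""
    local_mean = uniform_filter(msi_band, size=window)   # neighborhood average
    ratio = msi_band / (local_mean + eps)                # local spatial detail
    return hsi_up * ratio[np.newaxis, :, :]              # inject detail per band

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    hsi = rng.random((50, 32, 32))   # hypothetical hyperspectral cube (bands, H, W)
    msi = rng.random((128, 128))     # one hypothetical multispectral band, 4x finer grid
    fused = refine_pixels(upsample_hsi(hsi, 4), msi)
    print(fused.shape)               # (50, 128, 128)
```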