Please use this identifier to cite or link to this item: http://hdl.handle.net/10553/35376
Title: SVM-based real-time hyperspectral image classifier on a manycore architecture
Authors: Madroñal, D.; Lazcano, R.; Salvador, R.; Fabelo, H.; Ortega, S.; Callico, G. M.; Juarez, E.; Sanz, C.
UNESCO Classification: 33 Ciencias tecnológicas
Keywords: Support Vector Machine; Hyperspectral imaging; Massively parallel processing; Real-time processing; Energy consumption awareness, et al
Issue Date: 2017
Journal: Journal of Systems Architecture
Conference: Conference on Design and Architectures for Signal and Image Processing (DASIP)
Abstract: This paper presents a study of the design space of a Support Vector Machine (SVM) classifier with a linear kernel running on a manycore MPPA (Massively Parallel Processor Array) platform. This architecture gathers 256 cores distributed in 16 clusters working in parallel. This study aims at implementing a real-time hyperspectral SVM classifier, where real-time is defined as the time required to capture a hyperspectral image. To do so, two aspects of the SVM classifier have been analyzed: the classification algorithm and the system parallelization. On the one hand, concerning the classification algorithm, first, the classification model has been optimized to fit into the MPPA structure and, second, a probability estimation stage has been included to refine the classification results. On the other hand, the system parallelization has been divided into two levels: first, the parallelism of the classification has been exploited taking advantage of the pixel-wise classification methodology supported by the SVM algorithm and, second, a double-buffer communication procedure has been implemented to parallelize the image transmission and the cluster classification stages. Experimenting with medical images, an average speedup of 9 has been obtained using a single-cluster and double-buffer implementation with 16 cores working in parallel. As a result, a system whose processing time grows linearly with the number of pixels composing the scene has been implemented. Specifically, only 3 µs are required to process each pixel within the captured scene, independently of the spatial resolution of the image.
URI: http://hdl.handle.net/10553/35376
ISSN: 1383-7621
DOI: 10.1016/j.sysarc.2017.08.002
Source: Journal of Systems Architecture [ISSN 1383-7621], v. 80, p. 30-40
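The abstract describes a pixel-wise linear SVM classification stage followed by probability estimation, with the per-pixel independence of the decision function being what lets the work be spread across the 16 cores of an MPPA cluster. The C sketch below illustrates that per-pixel step only; the band count NB_BANDS, class count NB_CLASSES, and the softmax used as a stand-in for the probability-estimation stage are all assumptions of this sketch, not details taken from the paper.

/*
 * Minimal sketch (not the authors' code): each hyperspectral pixel is a
 * vector of NB_BANDS values, scored independently against a one-vs-all
 * linear model (weights w and bias b per class). Because every pixel is
 * classified on its own, an image block can be split across cores.
 */
#include <stdio.h>
#include <math.h>

#define NB_BANDS   128   /* spectral bands per pixel (assumed)  */
#define NB_CLASSES 4     /* number of classes (assumed)         */

/* Linear decision value for one pixel and one class: w.x + b */
static double decision_value(const double *pixel, const double *w, double b)
{
    double acc = b;
    for (int k = 0; k < NB_BANDS; ++k)
        acc += w[k] * pixel[k];
    return acc;
}

/* Classify one pixel: pick the class with the largest decision value and
 * turn the scores into pseudo-probabilities with a softmax (a stand-in for
 * the probability-estimation stage mentioned in the abstract). */
static int classify_pixel(const double *pixel,
                          double w[NB_CLASSES][NB_BANDS],
                          const double *b,
                          double prob[NB_CLASSES])
{
    double score[NB_CLASSES], sum = 0.0;
    int best = 0;

    for (int c = 0; c < NB_CLASSES; ++c) {
        score[c] = decision_value(pixel, w[c], b[c]);
        if (score[c] > score[best])
            best = c;
    }
    for (int c = 0; c < NB_CLASSES; ++c) {
        prob[c] = exp(score[c] - score[best]); /* numerically stable softmax */
        sum += prob[c];
    }
    for (int c = 0; c < NB_CLASSES; ++c)
        prob[c] /= sum;

    return best;
}

int main(void)
{
    /* Toy model and a single toy pixel, just to exercise the functions. */
    static double w[NB_CLASSES][NB_BANDS];
    static double b[NB_CLASSES] = { 0.1, -0.2, 0.05, 0.0 };
    static double pixel[NB_BANDS];
    double prob[NB_CLASSES];

    for (int k = 0; k < NB_BANDS; ++k) {
        pixel[k] = 0.5;
        for (int c = 0; c < NB_CLASSES; ++c)
            w[c][k] = (c + 1) * 1e-3;
    }

    int label = classify_pixel(pixel, w, b, prob);
    printf("predicted class %d (p = %.3f)\n", label, prob[label]);
    return 0;
}

Since each call to classify_pixel touches only its own pixel, a block of pixels can be partitioned statically among the cluster cores, and the double-buffer scheme described in the abstract would then overlap the transfer of the next pixel block with the classification of the current one.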
Appears in Collections: Artículos
SCOPUS Citations: 28 (checked on Dec 1, 2024)
Web of Science Citations: 23 (checked on Nov 24, 2024)
Items in accedaCRIS are protected by copyright, with all rights reserved, unless otherwise indicated.