Please use this identifier to cite or link to this item: http://hdl.handle.net/10553/114747
DC Field | Value | Language
dc.contributor.author | Ortega Zamorano, Francisco | en_US
dc.contributor.author | Jerez, José Mª | en_US
dc.contributor.author | Gómez, Iván | en_US
dc.contributor.author | Franco, Leonardo | en_US
dc.date.accessioned | 2022-05-16T19:17:41Z | -
dc.date.available | 2022-05-16T19:17:41Z | -
dc.date.issued | 2017 | en_US
dc.identifier.issn | 1069-2509 | en_US
dc.identifier.uri | http://hdl.handle.net/10553/114747 | -
dc.description.abstract | Training large-scale neural networks, such as those used nowadays in Deep Learning schemes, requires long computation times or high-performance computing solutions such as cluster computing, GPU boards, etc. As a possible alternative, in this work the Back-Propagation learning algorithm is implemented on an FPGA board using a layer multiplexing scheme, in which a single layer of neurons is physically implemented in parallel but can be reused any number of times in order to simulate multi-layer architectures. An on-chip implementation of the algorithm is carried out using a training/validation scheme in order to avoid overfitting effects. The hardware implementation is tested on several configurations, making it possible to simulate architectures comprising up to 127 hidden layers with up to 60 neurons per layer. We confirmed the correct implementation of the algorithm and compared computation times against C and Matlab code executed on a multicore supercomputer, observing a clear advantage for the proposed FPGA scheme. The layer multiplexing scheme provides a simple and flexible approach in comparison to standard implementations of the Back-Propagation algorithm, and represents an important step towards the FPGA implementation of deep neural networks, one of the most novel and successful existing models for prediction problems. | en_US
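The layer multiplexing idea described in the abstract — one physically implemented layer of neurons, reused sequentially with different weights to emulate a deep network — can be sketched in software. This is a minimal illustrative sketch, not the authors' FPGA design: the function names, sigmoid activation, and layer sizes are assumptions chosen for the example.

```python
import numpy as np

def sigmoid(x):
    # assumed activation; the paper's hardware neuron may differ
    return 1.0 / (1.0 + np.exp(-x))

def layer_engine(inputs, weights, bias):
    """The single 'physical' layer: all neurons fire in parallel.

    In hardware this would be a fixed bank of neuron circuits; here it
    is an ordinary matrix-vector product followed by the activation.
    """
    return sigmoid(weights @ inputs + bias)

def forward_multiplexed(x, layer_params):
    """Emulate a deep network by reusing the one layer engine.

    For each virtual layer, its weights are 'loaded' into the engine
    and the previous activations are fed back in -- the sequential
    reuse that the layer multiplexing scheme performs on-chip.
    """
    a = x
    for W, b in layer_params:
        a = layer_engine(a, W, b)
    return a

# Toy configuration: 3 virtual hidden layers of 4 neurons each,
# then a single output neuron (sizes are arbitrary for illustration).
rng = np.random.default_rng(0)
params = [(rng.standard_normal((4, 4)), rng.standard_normal(4))
          for _ in range(3)]
params.append((rng.standard_normal((1, 4)), rng.standard_normal(1)))

y = forward_multiplexed(rng.standard_normal(4), params)
print(y.shape)  # (1,)
```

The appeal of the scheme, as the abstract notes, is that hardware cost is fixed by the widest layer (here 60 neurons on the actual board) while depth (up to 127 hidden layers) costs only additional sequential passes through the same circuitry.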
dc.language | eng | en_US
dc.relation | 5770 | en_US
dc.relation.ispartof | Integrated Computer-Aided Engineering | en_US
dc.source | Integrated Computer-Aided Engineering [ISSN 1069-2509], vol. 24, n. 2, p. 171-185 | en_US
dc.subject | 1203 Computer science | en_US
dc.subject.other | Hardware implementation | en_US
dc.subject.other | FPGA | en_US
dc.subject.other | Supervised learning | en_US
dc.subject.other | Deep neural networks | en_US
dc.subject.other | Layer multiplexing | en_US
dc.title | Layer multiplexing FPGA implementation for deep back-propagation learning | en_US
dc.type | info:eu-repo/semantics/Article | en_US
dc.identifier.doi | 10.3233/ICA-170538 | en_US
dc.identifier.scopus | 2-s2.0-85015707456 | -
dc.identifier.isi | WOS:000397888000006 | -
dc.contributor.orcid | #NODATA# | -
dc.contributor.orcid | #NODATA# | -
dc.contributor.orcid | #NODATA# | -
dc.contributor.orcid | #NODATA# | -
dc.description.lastpage | 185 | en_US
dc.identifier.issue | 2 | -
dc.description.firstpage | 171 | en_US
dc.relation.volume | 24 | en_US
dc.investigacion | Engineering and Architecture | en_US
dc.type2 | Article | en_US
dc.utils.revision | | en_US
dc.identifier.ulpgc | No | en_US
dc.contributor.buulpgc | BU-INF | en_US
dc.description.sjr | 0.665 |
dc.description.jcr | 3.667 |
dc.description.sjrq | Q1 |
dc.description.jcrq | Q1 |
dc.description.scie | SCIE |
item.grantfulltext | open | -
item.fulltext | Full text available | -
crisitem.author.dept | GIR SIANI: Inteligencia Artificial, Robótica y Oceanografía Computacional | -
crisitem.author.dept | IU Sistemas Inteligentes y Aplicaciones Numéricas | -
crisitem.author.orcid | 0000-0002-4397-2905 | -
crisitem.author.parentorg | IU Sistemas Inteligentes y Aplicaciones Numéricas | -
crisitem.author.fullName | Ortega Zamorano, Francisco | -
Appears in Collections: Artículos
Adobe PDF (908.87 kB)
Items in accedaCRIS are protected by copyright, with all rights reserved, unless otherwise indicated.