Persistent identifier to cite or link this item: http://hdl.handle.net/10553/128887
DC Field | Value | Language
dc.contributor.author: Reyes, Daniel (en_US)
dc.contributor.author: Sánchez Pérez, Javier (en_US)
dc.date.accessioned: 2024-02-12T10:57:31Z
dc.date.available: 2024-02-12T10:57:31Z
dc.date.issued: 2024 (en_US)
dc.identifier.issn: 2405-8440 (en_US)
dc.identifier.other: Scopus
dc.identifier.uri: http://hdl.handle.net/10553/128887
dc.description.abstract: Brain tumors are a diverse group of neoplasms that are challenging to detect and classify due to their varying characteristics. Deep learning techniques have proven to be effective in tumor classification. However, there is a lack of studies that compare these techniques using a common methodology. This work aims to analyze the performance of convolutional neural networks in the classification of brain tumors. We propose a network consisting of a few convolutional layers, batch normalization, and max-pooling. Then, we explore recent deep architectures, such as VGG, ResNet, EfficientNet, and ConvNeXt. The study relies on two magnetic resonance imaging datasets with over 3000 images of three types of tumors (gliomas, meningiomas, and pituitary tumors), as well as images without tumors. We determine the optimal hyperparameters of the networks using the training and validation sets. The training and test sets are used to assess the performance of the models from different perspectives, including training from scratch, data augmentation, transfer learning, and fine-tuning. The experiments are performed using the TensorFlow and Keras libraries in Python. We compare the accuracy of the models and analyze their complexity based on the capacity of the networks, their training times, and image throughput. Several networks achieve high accuracy rates on both datasets, with the best model reaching 98.7% accuracy, on par with state-of-the-art methods. The average precision for each type of tumor is 94.3% for gliomas, 93.8% for meningiomas, 97.9% for pituitary tumors, and 95.3% for images without tumors. VGG is the largest model, with over 171 million parameters, whereas MobileNet and EfficientNetB0 are the smallest, with 3.2 and 5.9 million parameters, respectively. These two networks are also the fastest to train, at 23.7 and 25.4 seconds per epoch, respectively, while ConvNeXt is the slowest at 58.2 seconds per epoch. Our custom model obtained the highest image throughput, with 234.37 images per second, followed by MobileNet with 226 images per second; ConvNeXt obtained the lowest throughput, with 97.35 images per second. ResNet, MobileNet, and EfficientNet are the most accurate networks, with MobileNet and EfficientNet demonstrating superior performance in terms of complexity. Most models achieve their best accuracy using transfer learning followed by a fine-tuning step, whereas data augmentation generally does not increase the accuracy of the models. (en_US)
dc.language: eng (en_US)
dc.relation: F2022/03 (en_US)
dc.relation.ispartof: Heliyon (en_US)
dc.source: Heliyon [ISSN 2405-8440], v. 10, 3 (February 2024) (en_US)
dc.subject: 221118 Magnetic resonance (en_US)
dc.subject.other: Brain tumor classification (en_US)
dc.subject.other: Magnetic resonance imaging (en_US)
dc.subject.other: Deep learning (en_US)
dc.subject.other: Convolutional neural network (en_US)
dc.subject.other: Transfer learning (en_US)
dc.subject.other: Data augmentation (en_US)
dc.title: Performance of convolutional neural networks for the classification of brain tumors using magnetic resonance imaging (en_US)
dc.type: info:eu-repo/semantics/Article (en_US)
dc.type: Article (en_US)
dc.identifier.doi: 10.1016/j.heliyon.2024.e25468 (en_US)
dc.identifier.scopus: 85184023124
dc.identifier.isi: 001181656400001
dc.contributor.orcid: 0009-0000-1891-0510
dc.contributor.orcid: 0000-0001-8514-4350
dc.contributor.authorscopusid: 58668878800
dc.contributor.authorscopusid: 22735426600
dc.identifier.eissn: 2405-8440
dc.identifier.issue: 3
dc.relation.volume: 10 (en_US)
dc.investigacion: Health Sciences (en_US)
dc.investigacion: Engineering and Architecture (en_US)
dc.type2: Article (en_US)
dc.contributor.daisngid: 27742816
dc.contributor.daisngid: 1176106
dc.description.numberofpages: 22 (en_US)
dc.utils.revision: (en_US)
dc.contributor.wosstandard: WOS:Reyes, D
dc.contributor.wosstandard: WOS:Sánchez, J
dc.date.coverdate: February 2024 (en_US)
dc.identifier.ulpgc: (en_US)
dc.contributor.buulpgc: BU-INF (en_US)
dc.description.sjr: 0.617
dc.description.jcr: 4.0
dc.description.sjrq: Q1
dc.description.jcrq: Q2
dc.description.esci: ESCI
dc.description.miaricds: 10.3
item.fulltext: With full text
item.grantfulltext: open
crisitem.author.dept: GIR IUCES: Centro de Tecnologías de la Imagen
crisitem.author.dept: IU de Cibernética, Empresa y Sociedad (IUCES)
crisitem.author.dept: Departamento de Informática y Sistemas
crisitem.author.orcid: 0000-0001-8514-4350
crisitem.author.parentorg: IU de Cibernética, Empresa y Sociedad (IUCES)
crisitem.author.fullName: Sánchez Pérez, Javier
Collection: Articles
Adobe PDF (2.31 MB)
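
The abstract describes a compact custom CNN (a few convolutional layers with batch normalization and max-pooling) and a transfer-learning/fine-tuning protocol implemented with TensorFlow and Keras. The sketch below illustrates both ideas under stated assumptions: the input size (224×224 RGB), the layer counts and filter sizes, the EfficientNetB0 backbone choice, and all hyperparameters are illustrative, not the authors' exact configuration.

```python
# Minimal sketch, NOT the paper's exact models: a small CNN built from
# conv/batch-norm/max-pool blocks, plus a transfer-learning variant.
# Input size, layer counts, backbone, and hyperparameters are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 4              # glioma, meningioma, pituitary tumor, no tumor
INPUT_SHAPE = (224, 224, 3)  # assumed image size

# Optional data augmentation stage (the study evaluates augmentation too).
augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.05),
])

def custom_cnn() -> tf.keras.Model:
    """A few convolution + batch normalization + max-pooling blocks,
    followed by a small dense classifier."""
    return models.Sequential([
        layers.Input(shape=INPUT_SHAPE),
        layers.Conv2D(32, 3, padding="same", activation="relu"),
        layers.BatchNormalization(),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.BatchNormalization(),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

def transfer_model(fine_tune: bool = False) -> tf.keras.Model:
    """Pretrained EfficientNetB0 backbone with a new classification head.
    fine_tune=False trains only the head (transfer learning);
    fine_tune=True unfreezes the backbone for a fine-tuning pass."""
    base = tf.keras.applications.EfficientNetB0(
        include_top=False, weights="imagenet", input_shape=INPUT_SHAPE)
    base.trainable = fine_tune
    inputs = layers.Input(shape=INPUT_SHAPE)
    x = base(inputs, training=False)  # keep batch-norm statistics frozen
    x = layers.GlobalAveragePooling2D()(x)
    outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
    model = models.Model(inputs, outputs)
    model.compile(
        optimizer=tf.keras.optimizers.Adam(1e-4 if fine_tune else 1e-3),
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"])
    return model
```

Training the new head with the backbone frozen, then unfreezing it for a second pass at a lower learning rate, mirrors the transfer-learning-plus-fine-tuning protocol that the abstract reports as the most accurate setting for most models.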
Scopus citations: 6 (updated 06-Oct-2024)
Web of Science citations: 5 (updated 06-Oct-2024)
Visits: 214 (updated 05-Oct-2024)
Downloads: 56 (updated 05-Oct-2024)
