Please use this identifier to cite or link to this item:
http://hdl.handle.net/10553/128887
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Reyes, Daniel | en_US |
dc.contributor.author | Sánchez Pérez, Javier | en_US |
dc.date.accessioned | 2024-02-12T10:57:31Z | - |
dc.date.available | 2024-02-12T10:57:31Z | - |
dc.date.issued | 2024 | en_US |
dc.identifier.issn | 2405-8440 | en_US |
dc.identifier.other | Scopus | - |
dc.identifier.uri | http://hdl.handle.net/10553/128887 | - |
dc.description.abstract | Brain tumors are a diverse group of neoplasms that are challenging to detect and classify due to their varying characteristics. Deep learning techniques have proven effective in tumor classification, but there is a lack of studies comparing these techniques under a common methodology. This work analyzes the performance of convolutional neural networks in the classification of brain tumors. We propose a network consisting of a few convolutional layers, batch normalization, and max-pooling. We then explore recent deep architectures, such as VGG, ResNet, EfficientNet, and ConvNeXt. The study relies on two magnetic resonance imaging datasets with over 3000 images of three types of tumors (gliomas, meningiomas, and pituitary tumors), as well as images without tumors. We determine the optimal hyperparameters of the networks using the training and validation sets. The training and test sets are used to assess the performance of the models from different perspectives, including training from scratch, data augmentation, transfer learning, and fine-tuning. The experiments are performed using the TensorFlow and Keras libraries in Python. We compare the accuracy of the models and analyze their complexity based on the capacity of the networks, their training times, and image throughput. Several networks achieve high accuracy on both datasets, with the best model reaching 98.7% accuracy, on par with state-of-the-art methods. The average precision for each type of tumor is 94.3% for gliomas, 93.8% for meningiomas, 97.9% for pituitary tumors, and 95.3% for images without tumors. VGG is the largest model, with over 171 million parameters, whereas MobileNet and EfficientNetB0 are the smallest, with 3.2 and 5.9 million parameters, respectively. These two networks are also the fastest to train, at 23.7 and 25.4 seconds per epoch, respectively. On the other hand, ConvNeXt is the slowest model, at 58.2 seconds per epoch. Our custom model obtained the highest image throughput, with 234.37 images per second, followed by MobileNet with 226 images per second. ConvNeXt obtained the smallest throughput, with 97.35 images per second. ResNet, MobileNet, and EfficientNet are the most accurate networks, with MobileNet and EfficientNet demonstrating superior performance in terms of complexity. Most models achieve the best accuracy using transfer learning followed by a fine-tuning step. However, data augmentation does not generally increase the accuracy of the models. | en_US |
dc.language | eng | en_US |
dc.relation | F2022/03 | en_US |
dc.relation.ispartof | Heliyon | en_US |
dc.source | Heliyon [ISSN 2405-8440], v. 10, 3 (February 2024) | en_US |
dc.subject | 221118 Magnetic resonance | en_US |
dc.subject.other | Brain tumor classification | en_US |
dc.subject.other | Magnetic resonance imaging | en_US |
dc.subject.other | Deep learning | en_US |
dc.subject.other | Convolutional neural network | en_US |
dc.subject.other | Transfer learning | en_US |
dc.subject.other | Data augmentation | en_US |
dc.title | Performance of convolutional neural networks for the classification of brain tumors using magnetic resonance imaging | en_US |
dc.type | info:eu-repo/semantics/Article | en_US |
dc.type | Article | en_US |
dc.identifier.doi | 10.1016/j.heliyon.2024.e25468 | en_US |
dc.identifier.scopus | 85184023124 | - |
dc.identifier.isi | 001181656400001 | - |
dc.contributor.orcid | 0009-0000-1891-0510 | - |
dc.contributor.orcid | 0000-0001-8514-4350 | - |
dc.contributor.authorscopusid | 58668878800 | - |
dc.contributor.authorscopusid | 22735426600 | - |
dc.identifier.eissn | 2405-8440 | - |
dc.identifier.issue | 3 | - |
dc.relation.volume | 10 | en_US |
dc.investigacion | Health Sciences | en_US |
dc.investigacion | Engineering and Architecture | en_US |
dc.type2 | Article | en_US |
dc.contributor.daisngid | 27742816 | - |
dc.contributor.daisngid | 1176106 | - |
dc.description.numberofpages | 22 | en_US |
dc.utils.revision | Yes | en_US |
dc.contributor.wosstandard | WOS:Reyes, D | - |
dc.contributor.wosstandard | WOS:Sánchez, J | - |
dc.date.coverdate | February 2024 | en_US |
dc.identifier.ulpgc | Yes | en_US |
dc.contributor.buulpgc | BU-INF | en_US |
dc.description.sjr | 0.617 | - |
dc.description.jcr | 4.0 | - |
dc.description.sjrq | Q1 | - |
dc.description.jcrq | Q2 | - |
dc.description.esci | ESCI | - |
dc.description.miaricds | 10.3 | - |
item.grantfulltext | open | - |
item.fulltext | With full text | - |
crisitem.author.dept | GIR IUCES: Centro de Tecnologías de la Imagen | - |
crisitem.author.dept | IU de Cibernética, Empresa y Sociedad (IUCES) | - |
crisitem.author.dept | Departamento de Informática y Sistemas | - |
crisitem.author.orcid | 0000-0001-8514-4350 | - |
crisitem.author.parentorg | IU de Cibernética, Empresa y Sociedad (IUCES) | - |
crisitem.author.fullName | Sánchez Pérez, Javier | - |
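The abstract reports that most models reach their best accuracy with transfer learning followed by a fine-tuning step, using TensorFlow and Keras. A minimal sketch of that two-stage protocol is shown below; the input size, learning rates, epoch counts, and choice of EfficientNetB0 as the backbone are illustrative assumptions, not values taken from the paper:

```python
# Sketch of transfer learning followed by fine-tuning with Keras.
# Backbone, input size, learning rates, and epochs are assumptions for illustration.
from tensorflow import keras

NUM_CLASSES = 4  # glioma, meningioma, pituitary tumor, no tumor

# Stage 1: transfer learning -- freeze an ImageNet-pretrained backbone
# and train only the new classification head.
base = keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet",
    input_shape=(224, 224, 3), pooling="avg")
base.trainable = False

model = keras.Sequential([
    base,
    keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer=keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)

# Stage 2: fine-tuning -- unfreeze the backbone and continue training
# at a much lower learning rate (recompiling applies the trainable change).
base.trainable = True
model.compile(optimizer=keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)
```

The low learning rate in the second stage is what keeps fine-tuning from destroying the pretrained features; `train_ds` and `val_ds` stand in for whatever `tf.data` pipelines the datasets would be loaded into.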
Appears in Collections: | Artículos |
Scopus™ Citations: 10 (checked on Nov 24, 2024)
Web of Science™ Citations: 7 (checked on Nov 24, 2024)
Page view(s): 214 (checked on Oct 5, 2024)
Download(s): 56 (checked on Oct 5, 2024)
Items in accedaCRIS are protected by copyright, with all rights reserved, unless otherwise indicated.