Persistent identifier to cite or link this item:
http://hdl.handle.net/10553/129714
DC field | Value | Language |
---|---|---|
dc.contributor.author | Gupta, Rinki | en_US |
dc.contributor.author | Singh, Rashmi | en_US |
dc.contributor.author | Travieso-González, Carlos M. | en_US |
dc.contributor.author | Burget, Radim | en_US |
dc.contributor.author | Kishore Dutta, Malay | en_US |
dc.date.accessioned | 2024-04-03T07:34:18Z | - |
dc.date.available | 2024-04-03T07:34:18Z | - |
dc.date.issued | 2024 | en_US |
dc.identifier.issn | 1746-8094 | en_US |
dc.identifier.other | Scopus | - |
dc.identifier.uri | http://hdl.handle.net/10553/129714 | - |
dc.description.abstract | Respiratory sounds convey significant information about pulmonary status. This study proposes a deep learning-based framework for an automatic, non-invasive diagnostic method of categorizing pulmonary sounds. A labelled database of pulmonary sounds was collected using an electronic stethoscope and an audio recording instrument. Two deep learning architectures, 1D DeepRespNet and 2D DeepRespNet, are proposed in this work; they were trained and evaluated on normalised 1-D time series and 2-D spectrograms of acoustic signals of six types of lung sounds, respectively. The models were highly optimized to yield superior performance on the considered dataset. Experimental results demonstrate that the 2D DeepRespNet model, trained with spectrogram-based representations, yields a higher accuracy of 95.2% on the test data than the 1D DeepRespNet trained on the time-series data. The proposed model may be deployed on a single-board computer or integrated into a smartphone to provide a standalone diagnostic tool that accurately and objectively classifies abnormal lung sounds with low time complexity. | en_US |
dc.language | eng | en_US |
dc.relation.ispartof | Biomedical Signal Processing and Control | en_US |
dc.source | Biomedical Signal Processing and Control [ISSN 1746-8094], v. 93 (July 2024) | en_US |
dc.subject | 3307 Electronic technology | en_US |
dc.subject.other | Convolutional Neural Network | en_US |
dc.subject.other | Multi-Class Classification | en_US |
dc.subject.other | Respiratory Disease | en_US |
dc.subject.other | Spectrogram | en_US |
dc.title | DeepRespNet: A deep neural network for classification of respiratory sounds | en_US |
dc.type | info:eu-repo/semantics/Article | en_US |
dc.type | Article | en_US |
dc.identifier.doi | 10.1016/j.bspc.2024.106191 | en_US |
dc.identifier.scopus | 85187204608 | - |
dc.contributor.orcid | NO DATA | - |
dc.contributor.orcid | NO DATA | - |
dc.contributor.orcid | NO DATA | - |
dc.contributor.orcid | 0000-0003-1849-5390 | - |
dc.contributor.orcid | NO DATA | - |
dc.contributor.authorscopusid | 55488127500 | - |
dc.contributor.authorscopusid | 57226730965 | - |
dc.contributor.authorscopusid | 57219115631 | - |
dc.contributor.authorscopusid | 23011250200 | - |
dc.contributor.authorscopusid | 58927547900 | - |
dc.identifier.eissn | 1746-8108 | - |
dc.relation.volume | 93 | en_US |
dc.investigacion | Engineering and Architecture | en_US |
dc.type2 | Article | en_US |
dc.utils.revision | Yes | en_US |
dc.date.coverdate | July 2024 | en_US |
dc.identifier.ulpgc | Yes | en_US |
dc.contributor.buulpgc | BU-TEL | en_US |
dc.description.sjr | 1.284 | - |
dc.description.jcr | 5.1 | - |
dc.description.sjrq | Q1 | - |
dc.description.jcrq | Q2 | - |
item.grantfulltext | none | - |
item.fulltext | No full text | -
crisitem.author.dept | GIR IDeTIC: División de Procesado Digital de Señales | - |
crisitem.author.dept | IU para el Desarrollo Tecnológico y la Innovación | - |
crisitem.author.dept | Departamento de Señales y Comunicaciones | - |
crisitem.author.orcid | 0000-0002-4621-2768 | - |
crisitem.author.parentorg | IU para el Desarrollo Tecnológico y la Innovación | - |
crisitem.author.fullName | Travieso González, Carlos Manuel | - |
Collection: | Articles
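The abstract above describes a pipeline in which lung-sound recordings are converted into 2-D spectrograms and classified into six categories by a convolutional neural network. The sketch below is only an illustrative outline of such a spectrogram-plus-CNN workflow; the `audio_to_spectrogram` helper, the `SmallRespCNN` layer sizes, the 4 kHz sample rate, and the 5 s clip length are assumed for illustration and are not the published DeepRespNet architecture or preprocessing.

```python
# Illustrative sketch only: a minimal spectrogram-based lung-sound classifier in the
# spirit of the 2D DeepRespNet described in the abstract. Layer sizes, sample rate,
# and preprocessing below are assumptions, not the authors' published design.
import numpy as np
import librosa
import torch
import torch.nn as nn

NUM_CLASSES = 6  # six lung-sound categories, as stated in the abstract


def audio_to_spectrogram(path, sr=4000, n_mels=64, duration=5.0):
    """Load a lung-sound recording and return a normalised log-mel spectrogram."""
    y, _ = librosa.load(path, sr=sr, duration=duration)
    y = librosa.util.fix_length(y, size=int(sr * duration))      # pad/trim to fixed length
    spec = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    spec = librosa.power_to_db(spec, ref=np.max)
    spec = (spec - spec.mean()) / (spec.std() + 1e-8)            # per-sample normalisation
    return torch.tensor(spec, dtype=torch.float32).unsqueeze(0)  # shape: (1, n_mels, frames)


class SmallRespCNN(nn.Module):
    """A compact 2-D CNN classifier; a stand-in, not the published DeepRespNet."""

    def __init__(self, num_classes=NUM_CLASSES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),  # pool to one value per channel, any input size
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))  # logits over the six classes


if __name__ == "__main__":
    model = SmallRespCNN()
    dummy = torch.randn(8, 1, 64, 40)  # batch of 8 dummy spectrograms
    logits = model(dummy)
    print(logits.shape)                # torch.Size([8, 6])
```

The stand-in network above is only meant to show the data flow from raw audio to a six-class prediction; the reported 95.2% test accuracy refers to the authors' 2D DeepRespNet, not to this sketch.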
Items in ULPGC accedaCRIS are protected by copyright, with all rights reserved, unless otherwise indicated.