Please use this identifier to cite or link to this item:
http://hdl.handle.net/10553/110300
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Kaushik, Manoj | en_US |
dc.contributor.author | Baghel, Neeraj | en_US |
dc.contributor.author | Burget, Radim | en_US |
dc.contributor.author | Travieso González, Carlos Manuel | en_US |
dc.contributor.author | Dutta, Malay Kishore | en_US |
dc.date.accessioned | 2021-07-08T09:00:03Z | - |
dc.date.available | 2021-07-08T09:00:03Z | - |
dc.date.issued | 2021 | en_US |
dc.identifier.issn | 1746-8094 | en_US |
dc.identifier.uri | http://hdl.handle.net/10553/110300 | - |
dc.description.abstract | A child has specific language impairment (SLI), or developmental dysphasia (DD), when speech is delayed or language development is disordered for no apparent reason. Because the condition may be related to hearing loss, speech abnormalities should be diagnosed at an early stage. Existing methods are based mainly on the utterance of vowels and have a high misclassification rate. This article proposes an automatic deep learning model that can serve as an effective tool for diagnosing SLI at an early stage. In the proposed work, raw audio is processed with the Short-time Fourier transform and converted to decibel (dB)-scaled spectrograms, which are classified by the proposed convolutional neural network (CNN). The approach uses utterances covering seven vocabulary types (vowels, consonants, and isolated words of different syllable counts). A rigorous analysis across age groups was performed, and 10-fold cross-validation (CV) was used to test the accuracy of the classifier. Comprehensive experiments show that 99.09% of the children are correctly diagnosed by the proposed framework, which is superior to state-of-the-art methods. The scheme is gender- and speaker-independent. The model can be used as a stand-alone diagnostic tool to assist the automatic screening of children for SLI and will be helpful in remote areas where professionals are not available. It is robust and efficient, with low time complexity suitable for real-time applications. [See the illustrative pipeline sketch after this record.] | en_US |
dc.language | eng | en_US |
dc.relation.ispartof | Biomedical Signal Processing and Control | en_US |
dc.source | Biomedical Signal Processing and Control [ISSN 1746-8094], v. 68, 102798 (July 2021) | en_US |
dc.subject | 3314 Medical technology | en_US |
dc.subject.other | Developmental dysphasia | en_US |
dc.subject.other | Diagnosis | en_US |
dc.subject.other | Envelope modulation spectra | en_US |
dc.title | SLINet: Dysphasia detection in children using deep neural network | en_US |
dc.type | info:eu-repo/semantics/Article | en_US |
dc.type | article | en_US |
dc.identifier.doi | 10.1016/j.bspc.2021.102798 | en_US |
dc.identifier.scopus | 2-s2.0-85107839032 | - |
dc.contributor.orcid | 0000-0002-5970-7321 | - |
dc.contributor.orcid | 0000-0002-0081-6224 | - |
dc.contributor.orcid | 0000-0003-1849-5390 | - |
dc.contributor.orcid | #NODATA# | - |
dc.contributor.orcid | 0000-0003-2462-737X | - |
dc.investigacion | Engineering and Architecture | en_US |
dc.type2 | Article | en_US |
dc.utils.revision | Yes | en_US |
dc.identifier.ulpgc | Yes | en_US |
dc.contributor.buulpgc | BU-TEL | en_US |
dc.description.sjr | 1.211 | |
dc.description.jcr | 5.076 | |
dc.description.sjrq | Q1 | |
dc.description.jcrq | Q2 | |
dc.description.scie | SCIE | |
dc.description.miaricds | 10.7 | |
item.fulltext | No full text | - |
item.grantfulltext | none | - |
crisitem.author.dept | GIR IDeTIC: División de Procesado Digital de Señales | - |
crisitem.author.dept | IU para el Desarrollo Tecnológico y la Innovación | - |
crisitem.author.dept | Departamento de Señales y Comunicaciones | - |
crisitem.author.orcid | 0000-0002-4621-2768 | - |
crisitem.author.parentorg | IU para el Desarrollo Tecnológico y la Innovación | - |
crisitem.author.fullName | Travieso González, Carlos Manuel | - |
Appears in Collections: | Articles |
Items in accedaCRIS are protected by copyright, with all rights reserved, unless otherwise indicated.
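Illustrative pipeline sketch: the abstract describes a concrete processing chain (raw audio → Short-time Fourier transform → dB-scaled spectrogram → CNN classifier). The Python sketch below shows a minimal version of that chain, assuming `librosa` and PyTorch. The record does not specify SLINet's actual architecture or hyperparameters (FFT size, hop length, layer widths, class count), so every concrete value and the `utterance.wav` filename below are illustrative assumptions, not the authors' model.

```python
# Hypothetical sketch of the abstract's pipeline, NOT the published SLINet
# architecture: raw audio -> STFT -> dB-scaled spectrogram -> small CNN.
import numpy as np
import librosa
import torch
import torch.nn as nn

def audio_to_db_spectrogram(path, sr=16000, n_fft=512, hop_length=256):
    """Load a raw utterance and return a dB-scaled magnitude spectrogram.
    sr, n_fft and hop_length are assumed values; the paper's settings
    are not given in this record."""
    y, _ = librosa.load(path, sr=sr)
    stft = librosa.stft(y, n_fft=n_fft, hop_length=hop_length)  # Short-time Fourier transform
    return librosa.amplitude_to_db(np.abs(stft), ref=np.max)    # magnitudes on a dB scale

class SpectrogramCNN(nn.Module):
    """Minimal binary classifier (SLI vs. healthy); layer sizes are assumed."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((4, 4)),   # makes the head independent of clip length
        )
        self.classifier = nn.Linear(32 * 4 * 4, n_classes)

    def forward(self, x):                    # x: (batch, 1, freq_bins, time_frames)
        z = self.features(x)
        return self.classifier(z.flatten(1))

# Usage: one utterance -> class logits (file name is hypothetical)
spec = audio_to_db_spectrogram("utterance.wav")
x = torch.from_numpy(spec).float()[None, None]   # add batch and channel dimensions
logits = SpectrogramCNN()(x)
```

In a full experiment, the abstract's 10-fold cross-validation would wrap this model in a fold loop (e.g. scikit-learn's `StratifiedKFold`), training on nine folds and scoring on the held-out one.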