Persistent identifier to cite or link this item:
http://hdl.handle.net/10553/37179
Title: Adaptive segmentation of multimodal polysomnography data for sleep stages detection
Authors: Procházka, A.; Kuchyňka, J.; Yadollahi, M.; Suárez Araujo, Carmen Paz; Vyšata, O.
UNESCO classification: 120304 Artificial intelligence
Keywords: Depth sensors; Classification; Kinect; Image
Publication date: 2017
Serial publication: International Conference on Digital Signal Processing proceedings
Conference: 2017 22nd International Conference on Digital Signal Processing, DSP 2017
Abstract: The paper presents a new algorithm for adaptive classification of sleep stages using multimodal data recorded in the sleep laboratory during overnight polysomnography. The proposed method includes a learning process applied to a set of individuals whose sleep stages were classified by an experienced neurologist. Features evaluated over 30 s time windows of selected multimodal signals are used to construct and optimize the proposed two-layer neural network model. The resulting computational system, based on breathing, EEG, and EOG features, is used to analyse new individuals and detect their sleep stages. Results include classification accuracies higher than 80% and 90% for the Wake and REM stages, respectively. The proposed method can adaptively modify model coefficients to detect sleep stages and sleeping disorders using man-machine interaction.
URI: http://hdl.handle.net/10553/37179
ISSN: 2165-3577
DOI: 10.1109/ICDSP.2017.8096108
Source: 2017 22nd International Conference on Digital Signal Processing (DSP) [ISSN 1546-1874], (2017)
Collection: Conference proceedings
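
As a rough illustration of the workflow described in the abstract (features computed over 30 s windows of EEG, EOG, and breathing signals, followed by a two-layer neural network classifier), the sketch below trains such a model on synthetic placeholder data. The sampling rate, the specific features, and the use of scikit-learn are illustrative assumptions, not the authors' implementation.

# Minimal sketch (NOT the authors' code): classify sleep stages from 30 s
# windows of multimodal polysomnography signals with a two-layer neural
# network, as outlined in the abstract. Sampling rate, feature choices, and
# the scikit-learn model are assumptions for illustration only.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

FS = 100          # assumed sampling rate [Hz]
WIN = 30 * FS     # 30 s windows, as stated in the abstract

def window_features(eeg, eog, breath):
    """Simple per-window features for each modality (assumed choices:
    mean absolute amplitude and relative low-frequency spectral power)."""
    feats = []
    for sig in (eeg, eog, breath):
        seg = sig[:WIN]
        spectrum = np.abs(np.fft.rfft(seg)) ** 2
        freqs = np.fft.rfftfreq(WIN, d=1.0 / FS)
        low = spectrum[(freqs >= 0.5) & (freqs < 4.0)].sum()
        total = spectrum[freqs >= 0.5].sum() + 1e-12
        feats += [np.mean(np.abs(seg)), low / total]
    return np.array(feats)

# Placeholder data: random signals standing in for expert-scored PSG records.
rng = np.random.default_rng(0)
n_windows = 200
X = np.vstack([
    window_features(rng.standard_normal(WIN),
                    rng.standard_normal(WIN),
                    rng.standard_normal(WIN))
    for _ in range(n_windows)
])
y = rng.choice(["Wake", "REM", "NREM"], size=n_windows)  # neurologist labels in the real setting

# Two-layer network: one hidden layer plus the output layer.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000, random_state=0)
model.fit(X_tr, y_tr)
print("Accuracy on held-out windows:", model.score(X_te, y_te))
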
Items in ULPGC accedaCRIS are protected by copyright, with all rights reserved, unless otherwise indicated.