U-Time: A Fully Convolutional Network for Time Series Segmentation Applied to Sleep Staging Summary

Introduction

During sleep, the brain goes through different sleep stages, each characterised by distinct patterns of brain and body activity. Stages can be determined from the measurements of a so-called polysomnography (PSG) study, which includes recordings of brain activity (EEG), eye movements, and facial muscle activity. The process of mapping the transitions between sleep stages is called sleep staging and provides the basis for diagnosing sleep disorders. Traditionally, sleep staging is done manually by splitting the PSG measurements into 30-second segments, each containing multiple channels of data, and classifying the segments individually. Since this requires a lot of expertise and time, automation is of interest: fast and reliable automated sleep staging could support diagnosis and help find novel biomarkers for disorders.
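To make the epoch-based setup concrete, here is a minimal Python sketch (not from the paper) of how one PSG channel could be split into non-overlapping 30-second segments; the 100 Hz sampling rate and the function name epoch_signal are assumptions chosen for illustration.

<pre>
import numpy as np

def epoch_signal(signal, sampling_rate=100, epoch_seconds=30):
    """Split a 1-D PSG channel into non-overlapping fixed-length epochs.

    signal: 1-D array of samples from a single channel (e.g. one EEG lead).
    Returns an array of shape (n_epochs, epoch_seconds * sampling_rate);
    trailing samples that do not fill a whole epoch are dropped.
    """
    samples_per_epoch = epoch_seconds * sampling_rate
    n_epochs = len(signal) // samples_per_epoch
    return signal[:n_epochs * samples_per_epoch].reshape(n_epochs, samples_per_epoch)

# Example: 8 hours of a single EEG channel sampled at 100 Hz
eeg = np.random.randn(8 * 3600 * 100)
epochs = epoch_signal(eeg)   # shape (960, 3000), i.e. 960 scorable epochs
</pre>

Each row of the resulting array is one 30-second epoch to which a human scorer (or a classifier) would assign a sleep stage.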

State-of-the-art sleep staging classifiers employ convolutional and recurrent layers. The problem with recurrent neural networks is that they can be difficult to optimize and often require dataset-specific hyperparameter tuning. As a result, they are typically trained for a single dataset and can be difficult for non-experts to use in a more general setting.

This paper introduces U-Time, a feed-forward fully convolutional network for sleep staging, which approaches time-series segmentation in the same way the popular U-Net architecture approaches image segmentation. It can be applied to different datasets without hyperparameter or architectural tuning, and it can output sleep stage predictions at any temporal resolution.
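As a rough illustration of this idea, the following PyTorch sketch builds a small 1-D U-Net-style encoder-decoder that maps a raw single-channel signal to dense per-sample stage logits and then average-pools them over each 30-second window to obtain one prediction per epoch. This is a minimal sketch in the spirit of U-Time, not the authors' implementation; all layer sizes, the 100 Hz sampling rate, and the class names are assumptions.

<pre>
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """Two 1-D convolutions with batch norm and ReLU, as in one U-Net level."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv1d(in_ch, out_ch, kernel_size=5, padding=2),
            nn.BatchNorm1d(out_ch), nn.ReLU(),
            nn.Conv1d(out_ch, out_ch, kernel_size=5, padding=2),
            nn.BatchNorm1d(out_ch), nn.ReLU(),
        )

    def forward(self, x):
        return self.block(x)

class TinyUTime(nn.Module):
    """Minimal 1-D U-Net-style segmentation network for sleep staging."""
    def __init__(self, n_stages=5, samples_per_epoch=3000):
        super().__init__()
        self.enc1 = ConvBlock(1, 16)
        self.enc2 = ConvBlock(16, 32)
        self.pool = nn.MaxPool1d(2)
        self.bottom = ConvBlock(32, 64)
        self.up2 = nn.ConvTranspose1d(64, 32, kernel_size=2, stride=2)
        self.dec2 = ConvBlock(64, 32)
        self.up1 = nn.ConvTranspose1d(32, 16, kernel_size=2, stride=2)
        self.dec1 = ConvBlock(32, 16)
        self.head = nn.Conv1d(16, n_stages, kernel_size=1)
        # Aggregates dense per-sample logits into one prediction per epoch.
        self.segment = nn.AvgPool1d(samples_per_epoch)

    def forward(self, x):                 # x: (batch, 1, n_samples)
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottom(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        dense = self.head(d1)             # per-sample stage logits
        return self.segment(dense)        # per-epoch stage logits

# Ten 30-second epochs of one EEG channel at 100 Hz
x = torch.randn(1, 1, 10 * 3000)
print(TinyUTime()(x).shape)               # torch.Size([1, 5, 10])
</pre>

In this sketch the segmentation head is only a pooling of the dense per-sample output, so changing the pooling width changes the temporal resolution of the predictions without altering the convolutional body.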

Results

U-Time was applied to 7 different PSG datasets with a fixed architecture and fixed hyperparameters, so there was no dataset-specific tuning. Furthermore, U-Time received only a single EEG channel as input.

The performance of U-Time was compared to published models trained for a specific dataset, where available. As a baseline, the authors use an improved version of DeepSleepNet, which combines convolutional and recurrent layers and was designed to be applicable across datasets. In the table summarising the results, this model is denoted CNN-LSTM (LSTM stands for long short-term memory, a recurrent architecture).
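For contrast with the fully convolutional U-Time, here is a minimal sketch of the general CNN-LSTM design, not the authors' actual DeepSleepNet-based baseline: a small 1-D CNN encodes each 30-second epoch independently, and a bidirectional LSTM then models stage transitions across the sequence of epochs. All layer sizes and names are assumptions.

<pre>
import torch
import torch.nn as nn

class TinyCNNLSTM(nn.Module):
    """Minimal CNN-LSTM sleep stager: per-epoch CNN features + LSTM over epochs."""
    def __init__(self, n_stages=5, feat_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=50, stride=6), nn.ReLU(),
            nn.MaxPool1d(8),
            nn.Conv1d(16, feat_dim, kernel_size=8), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),      # one feature vector per epoch
        )
        self.lstm = nn.LSTM(feat_dim, 64, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * 64, n_stages)

    def forward(self, x):                 # x: (batch, n_epochs, samples_per_epoch)
        b, e, s = x.shape
        feats = self.encoder(x.reshape(b * e, 1, s)).squeeze(-1)  # (b*e, feat_dim)
        seq, _ = self.lstm(feats.reshape(b, e, -1))               # (b, e, 128)
        return self.head(seq)                                     # per-epoch logits

# Two recordings, ten 30-second epochs each, sampled at 100 Hz
x = torch.randn(2, 10, 3000)
print(TinyCNNLSTM()(x).shape)             # torch.Size([2, 10, 5])
</pre>

The recurrent layer is what typically makes such models harder to optimize and more sensitive to hyperparameters than the purely convolutional U-Time.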

Across all datasets, U-Time achieved a performance score similar to or higher than any known state-of-the-art automated method specifically designed for that dataset, as well as the CNN-LSTM baseline.

[[File:Results.png]]