Deep Learning for Cardiologist-level Myocardial Infarction Detection in Electrocardiograms


Presented by

Zihui (Betty) Qin, Wenqi (Maggie) Zhao, Muyuan Zhao, Amartya (Marty) Mukherjee


Introduction

* Problem: to detect the risk of heart disease from ECG signals.

* Importance: to provide a correct diagnosis so that patients can receive proper health care.

* Area: deep learning – scientific machine learning

** Concerned with the design, training, and use of ML algorithms in an optimal manner for a given problem.

* Benefits of deep learning models:

** Multiple GPUs make it possible to build complicated neural networks that can be trained on terabyte-sized datasets and are robust against noise.

** Once training is complete, statistical inference takes little computational power.

* Question that needs to be answered:

** What AI architecture and datasets can provide evidence of symptoms and ECG changes, along with imaging evidence showing loss of viable myocardium and wall-motion abnormality?

* Purpose of the article:

** To provide a detailed analysis of the contribution of each ECG lead to identifying heart disease in the model, since previous studies on heart disease selected their leads arbitrarily.

** To show that using multiple channels of information enhances prediction accuracy in deep learning, i.e., processing the top three ECG leads simultaneously in the neural network.

** To show that feature engineering is not necessary in the training, validation, or testing process for ECG data in neural networks.

Related work

* Database used: the PTB database

1. CNN network

a. Used both noisy and denoised ECGs without feature engineering.

2. Used artificial neural networks, probabilistic neural networks, KNN, multi-layer perceptrons, and Naïve Bayes classification

a. Extracted two features, the T-wave integral and the total integral, to identify heart disease.

3. Developed two different ANNs: RBF and MLP

4. Found that supervised learning techniques have limited success on this problem and used multiple-instance learning instead

a. Demonstrated that the proposed algorithm, LTMIL, surpasses supervised approaches.

5. Created a new feature by approximating the ECG signal with a 20th-order polynomial, which achieved 94.4% accuracy

6. Used stationary wavelet transforms to decompose the ECG into sub-bands

a. SVM and KNN were used for classification.

b. Features used: sample entropy, normalized sub-bands, log energy entropy, and median slope from sub-bands.

7. Transfer learning – took a deep CNN model for arrhythmia detection and developed it to detect heart disease

8. Simple adaptive threshold (SAT)

a. A multiresolution approach with adaptive thresholding used to extract features: the depth of the Q peak and the elevation of the ST segment

9. A subject-oriented approach using a CNN that takes in leads II, III, and AVF

10. Modelled the ECG with a 2nd-order ODE and fed the best-fitting coefficients of the ECG signal into an SVM

11. A multi-channel CNN (16 layers) with long short-term memory units

12. A deep CNN that takes 3 seconds of lead II at a time as input

* Most of these works apply feature extraction/selection to the raw ECG data before training; an example of this style of feature engineering is sketched below.

** The problem with feature selection is that it is not practical for large volumes of data.

* Other papers that do not use feature selection pick ECG leads for classification arbitrarily and do not provide a rationale.
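As a concrete illustration of this feature-engineering style, here is a minimal sketch of the polynomial-approximation feature from item 5, assuming the 20th-order polynomial described above; the signal itself is a synthetic placeholder for a real ECG window:

```python
import numpy as np

# Synthetic stand-in for one ECG window (a real input would be a window
# extracted from a PTB record, sampled at 1 kHz).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1000)
ecg = np.sin(2 * np.pi * 1.2 * t) + 0.1 * rng.standard_normal(t.size)

# Approximate the window with a 20th-order polynomial; the 21 fitted
# coefficients become a fixed-length feature vector for a classifier.
poly = np.polynomial.Polynomial.fit(t, ecg, deg=20)
features = poly.coef
print(features.shape)   # (21,)
```

This is exactly the kind of hand-crafted preprocessing the present paper argues is unnecessary when the network is fed raw ECG data.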

Result

1. Quantified accuracies for single channels with 20-fold cross-validation; the channels with the highest individual accuracies were v5, v6, vx, vz, and ii (a sketch of this channel-ranking procedure follows this list).

2. Quantified accuracies for pairs drawn from the top 5 individual channels with 20-fold cross-validation; the highest-accuracy pair to feed into the neural network was lead v6 with lead vz.

3. Used 100-fold cross-validation on the (v6, vz) channel pair, then compared outliers among the top 20, top 50, and all 100 models, finding that the standard deviation is non-trivial and that a few models performed very poorly.

4. Discussed two factors affecting the evaluation of model performance:

1) The random train-validation-test split may affect model performance; this could be mitigated by access to a larger dataset and deserves further discussion.

2) Random initialization of the neural network's weights has little effect on the performance evaluation, since results with a fixed train-validation-test split show a high average.

5. Compared with the models in 12 other papers, the model in this article has the highest accuracy, specificity, and precision.

6. Further used a 290-fold patient-wise split, which yields the same result as the record-wise split: the (v6, vz) pair achieves the highest accuracy.

1) Noted that a patient-wise split may produce a lower accuracy estimate; however, it still maintains an average of 97.83%.
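A minimal sketch of the single-channel ranking procedure from steps 1–3, assuming data arrays shaped as described in Section 3.1; all data here are synthetic placeholders, and a logistic regression stands in for the paper's CNN so the sketch stays short:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic placeholders: one window per record, 15 channels each
# (12 conventional leads + 3 Frank leads); y = 1 for myocardial infarction.
rng = np.random.default_rng(0)
n_records, n_channels, n_samples = 549, 15, 1000   # shortened windows for speed
X = rng.standard_normal((n_records, n_channels, n_samples))
y = rng.integers(0, 2, size=n_records)
names = ["i", "ii", "iii", "avr", "avl", "avf",
         "v1", "v2", "v3", "v4", "v5", "v6", "vx", "vy", "vz"]

# Score each channel by its mean 20-fold cross-validated accuracy.
scores = {}
for c, name in enumerate(names):
    clf = LogisticRegression(max_iter=1000)
    scores[name] = cross_val_score(clf, X[:, c, :], y, cv=20).mean()

# The five best channels would then be paired and re-evaluated (step 2).
print(sorted(scores, key=scores.get, reverse=True)[:5])
```

On the real PTB data this ranking selects v5, v6, vx, vz, and ii; the pairing step then singles out (v6, vz).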


3.1: Data curation

Dataset: 549 ECG records in total

290 unique patients

Each ECG record has a mean length of over 100 seconds
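To make the record format concrete, here is a minimal sketch of loading one PTB record and slicing it into the 10-second windows the model consumes, assuming the wfdb Python package and a local copy of the PTB database (the record path is illustrative):

```python
import wfdb

# Read one PTB record, keeping only channels 11 and 14 (v6 and vz in the
# PTB channel ordering). The path/record name is illustrative.
record = wfdb.rdrecord("ptbdb/patient001/s0010_re", channels=[11, 14])
fs = record.fs                      # sampling frequency: 1000 Hz for PTB
signal = record.p_signal            # shape: (n_samples, 2)

# Slice the record into non-overlapping 10-second windows, the input
# length used by the model.
window = 10 * fs
n_windows = signal.shape[0] // window
windows = signal[: n_windows * window].reshape(n_windows, window, -1)
print(windows.shape)                # (n_windows, 10000, 2)
```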

3.2: ANN model

ConvNetQuake model + 1D batch normalization + label smoothing

Model (PyTorch):

- Input layer: a 10-second-long ECG signal

- Hidden layers: 8 × (1D convolution layer, ReLU activation, 1D batch normalization layer)

- Output layer: 1280 dimensions → 1 dimension, sigmoid activation

Batch size = 10

Learning rate = 10^-4

Optimizer = Adam

Train-validation-test split: 80-10-10
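A minimal PyTorch sketch of this architecture under stated assumptions: the filter count (32), kernel size (3), and stride (2) are carried over from ConvNetQuake rather than given in this summary (with a 10,000-sample input they reproduce the 1280-dimensional flattened output), and the label-smoothing factor eps is an assumed value:

```python
import torch
import torch.nn as nn

class MIDetector(nn.Module):
    """8 x (Conv1d -> ReLU -> BatchNorm1d), then a sigmoid output layer."""

    def __init__(self, in_channels: int = 2):  # e.g., the (v6, vz) pair
        super().__init__()
        layers, channels = [], in_channels
        for _ in range(8):
            # Stride-2 convolutions halve the sequence length each layer:
            # 10000 samples -> 40 after 8 layers; 40 * 32 filters = 1280.
            layers += [nn.Conv1d(channels, 32, kernel_size=3, stride=2, padding=1),
                       nn.ReLU(),
                       nn.BatchNorm1d(32)]
            channels = 32
        self.features = nn.Sequential(*layers)
        self.head = nn.Linear(1280, 1)

    def forward(self, x):                        # x: (batch, channels, 10000)
        z = self.features(x).flatten(1)          # (batch, 1280)
        return torch.sigmoid(self.head(z)).squeeze(1)

def smoothed_bce(p, y, eps=0.1):
    """Binary cross-entropy with label smoothing (eps is an assumption)."""
    y_s = y * (1 - eps) + eps / 2                # pull targets away from 0/1
    return nn.functional.binary_cross_entropy(p, y_s)

model = MIDetector()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on random data (batch size 10).
x = torch.randn(10, 2, 10000)
y = torch.randint(0, 2, (10,)).float()
optimizer.zero_grad()
loss = smoothed_bce(model(x), y)
loss.backward()
optimizer.step()
```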


Discussion

1. The paper introduced a new architecture for heart-condition classification based on raw ECG signals from multiple leads. It outperformed the state-of-the-art model by a margin of 1 percent.

2. This study finds that, out of the 15 ECG channels (12 conventional ECG leads and 3 Frank leads), channels v6, vz, and ii contain the most meaningful information for detecting myocardial infarction.

3. This study also finds that recent advances in machine learning can be leveraged to produce a model capable of classifying myocardial infarction with a cardiologist-level success rate.

4. To further improve the performance of the models, access to a larger labelled dataset is needed.

5. The PTB database is small, so it is difficult to test the true robustness of the model with a relatively small test set.

6. If a larger dataset can be found to help correctly identify other heart conditions beyond myocardial infarction, the research group plans to share its deep learning models and develop an open-source, computationally efficient app that can be readily used by cardiologists.

Conclusion

1. A detailed analysis of the relative importance of each of the standard 15 ECG channels indicates that deep learning can identify myocardial infarction by processing only ten seconds of raw ECG data from the v6, vz, and ii leads, reaching a cardiologist-level success rate.

2. Deep learning algorithms may be readily used as commodity software. A neural network model originally designed to identify earthquakes can be redesigned and tuned to identify myocardial infarction.

3. Deep learning does not require feature engineering of ECG data to identify myocardial infarction in the PTB database. The model required only ten seconds of raw ECG data to identify this heart condition with cardiologist-level performance.

4. Deep learning researchers should be given access to larger databases so they can work on detecting different types of heart conditions. Deep learning researchers and the cardiology community can work together to develop deep learning algorithms that provide trustworthy, real-time information about heart conditions with minimal computational resources.