== Marty ==
 
3.1: Data curation
 
Dataset: 549 ECG records in total
290 unique patients
Each ECG record has a mean length of over 100 s
 
3.2: ANN model
 
ConvNetQuake model + 1D batch normalization + label smoothing
 
Model (PyTorch):
- Input layer: 10-second-long ECG signal
- Hidden layers: 8 * (1D convolution layer, ReLU activation, 1D batch normalization layer)
- Output layer: 1280 dimensions -> 1 dimension, sigmoid activation
 
Batch size = 10
Learning rate = 10^-4
Optimizer = Adam
 
80-10-10 train-validation-test split
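
To make the architecture concrete, below is a minimal PyTorch sketch. The kernel size (3), stride (2), channel width (32), input length (10,240 samples, so that 8 stride-2 layers leave 40 * 32 = 1280 flattened features), and the label-smoothing factor (0.1) are assumptions not stated above.

<pre>
# Minimal sketch of the model described above; kernel size, stride,
# channel width, input length, and the smoothing eps are assumptions.
import torch
import torch.nn as nn

class ECGNet(nn.Module):
    def __init__(self, in_ch=1, width=32, n_layers=8):
        super().__init__()
        blocks = []
        for i in range(n_layers):
            blocks += [
                # 1D convolution -> ReLU -> 1D batch normalization
                nn.Conv1d(in_ch if i == 0 else width, width,
                          kernel_size=3, stride=2, padding=1),
                nn.ReLU(),
                nn.BatchNorm1d(width),
            ]
        self.features = nn.Sequential(*blocks)
        self.classifier = nn.Linear(1280, 1)  # 1280 dimensions -> 1 dimension

    def forward(self, x):            # x: (batch, in_ch, 10240)
        z = self.features(x)         # (batch, 32, 40) after 8 stride-2 layers
        return torch.sigmoid(self.classifier(z.flatten(1)))  # sigmoid output

def smooth(y, eps=0.1):
    """Label smoothing for binary targets; eps = 0.1 is an assumed value."""
    return y * (1 - eps) + 0.5 * eps

model = ECGNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # Adam, lr = 10^-4
criterion = nn.BCELoss()

# One training step on a batch of 10 records, as specified above:
x = torch.randn(10, 1, 10240)              # placeholder 10 s ECG signals
y = torch.randint(0, 2, (10, 1)).float()   # placeholder binary labels
optimizer.zero_grad()
loss = criterion(model(x), smooth(y))
loss.backward()
optimizer.step()
</pre>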
 
 
 
== Betty ==
 
Results:
 
1. Accuracies for single channels are quantified with 20-fold cross-validation; the channels with the highest individual accuracies are v5, v6, vx, vz, and ii.
 
2. Accuracies for pairs drawn from the top 5 individual channels are quantified with 20-fold cross-validation; the highest-accuracy pair to feed into the neural network is lead v6 with lead vz.
 
3. 100-fold cross-validation is then run on the v6 and vz channel pair, and outliers are compared across the top-20, top-50, and all 100 models ranked by performance; the standard deviation is non-trivial, and a few models perform very poorly. A sketch of this record-wise cross-validation loop follows below.
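
Below is a rough sketch of the record-wise k-fold evaluation used in items 1-3; since the summary does not give the evaluation code, a logistic-regression stand-in replaces the actual CNN and the data are synthetic placeholders.

<pre>
# Hedged sketch of record-wise k-fold cross-validation for one channel
# (or channel pair); the classifier and data are stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

def cv_accuracy(X, y, n_splits=20):
    """Mean and standard deviation of test accuracy over n_splits folds."""
    accs = []
    for train_idx, test_idx in KFold(n_splits=n_splits, shuffle=True).split(X):
        clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
        accs.append(clf.score(X[test_idx], y[test_idx]))
    return float(np.mean(accs)), float(np.std(accs))

X = np.random.randn(549, 64)       # placeholder features for one lead
y = np.random.randint(0, 2, 549)   # placeholder labels
print(cv_accuracy(X, y))           # rank leads (then pairs) by mean accuracy
</pre>

Raising n_splits to 100 gives the per-fold accuracy distribution examined in item 3.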
 
4. Two factors affecting the evaluation of model performance are discussed:
 
1) The random train-validation-test split can affect the measured performance of the model; this could be mitigated with access to a larger data set and merits further discussion.
 
2) Random initialization of the neural network's weights has little effect on the performance evaluation, since the average results remain high under a fixed train-validation-test split.
 
5. Compared with other models, the model in this article achieves the highest accuracy, specificity, and precision.
 
6. A further 290-fold patient-wise split yields the same highest-accuracy pair, v6 and vz, as the record-wise split (the two splitting schemes are sketched below).
 
1) Although a patient-wise split might be expected to yield a lower accuracy evaluation, the model still maintains an average accuracy of 97.83%.
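
As a sketch of the record-wise vs patient-wise distinction in item 6: record-wise folds may place records from the same patient on both sides of a split, while a patient-wise split keeps all of a patient's records together. With scikit-learn, a 290-group GroupKFold holds out one patient per fold; the patient IDs below are synthetic placeholders.

<pre>
# Record-wise vs patient-wise splitting; patient IDs are synthetic.
import numpy as np
from sklearn.model_selection import GroupKFold, KFold

n_records = 549
records = np.arange(n_records)
patient_ids = records % 290   # placeholder: 290 patients, 1-2 records each

# Record-wise: a patient's records can appear in both train and test.
record_wise = KFold(n_splits=100, shuffle=True).split(records)

# Patient-wise: every record of a patient stays on one side; with 290
# patients, a 290-fold GroupKFold holds out one patient per fold.
patient_wise = GroupKFold(n_splits=290).split(records, groups=patient_ids)

train_idx, test_idx = next(patient_wise)
assert set(patient_ids[train_idx]).isdisjoint(patient_ids[test_idx])
</pre>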
