Bayesian Network as a Decision Tool for Predicting ALS Disease


Revision as of 14:16, 25 November 2021

Presented by

Bsodjahi

Introduction

In order to propose the best decision tool for predicting Amyotrophic Lateral Sclerosis (ALS), Hasan Aykut et al. present in the paper a comparative empirical study of the predictive performance of eight supervised machine learning classifiers, namely Bayesian Network, Artificial Neural Networks, Logistic Regression, Naïve Bayes, J48, Support Vector Machines, KStar, and K-Nearest Neighbor. Using a dataset consisting of blood plasma protein levels and independent personal features, they predicted ALS patients with each classifier and found that the Bayesian Network offers the best results on various metrics, including accuracy (88.7%) and Area Under the Curve (AUC, 97%).

Our summary of the paper begins with a review of the previous works underpinning its motivation; we then present the dataset and the authors' approach, analyze the results, and close with the conclusion.

Previous Work and Motivation

ALS is a nervous system disease that progressively destroys nerve cells in the brain and spinal cord, impairing the patient's upper and lower motor neurons and leading to the loss of muscle control. Its origin is still unknown, though in some instances it is thought to be hereditary. Sadly, at this point in time it is not curable: once started, the progressive degeneration of the muscles cannot be halted [1] and inexorably results in the patient's death within 2–5 years [2].

The symptoms that ALS patients exhibit are not distinctively unique to ALS, as they are similar to those of a host of other neurological disorders. Furthermore, because the impact on the patient's motor skills is usually not noticeable at an early stage [3], diagnosis at that time is a challenge. One of the main diagnosis protocols, known as the El Escorial criteria, involves a battery of tests taking 3–6 months. This is a considerable amount of time, since a quicker diagnosis would allow earlier medical monitoring, conducive to improving the patient's living conditions, with the possibility of extended survival.

Given the need for a more timely yet effective diagnosis, the authors of this paper proposed applying machine learning to identify, among a list of candidate methods, the approach that yields the most accurate prediction.

Dataset

The table below shows an overview of the patients' features for the data, which were collected in prior experimental research. There are 204 data points overall, of which about 50% are from ALS patients; the rest consists of Parkinson's patients, a neurological control group, and a healthy-participant control group.

Study Methods

Figure 1 below shows the global architecture of the modelling process in the comparative machine-learning performance study. Though all parts are important, the last two are of most interest to us, and we will focus on those.
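The last two stages of the process, model training and performance comparison, can be sketched as a generic cross-validation loop. The snippet below is a schematic stand-in for the authors' workflow, not their actual code: the data, the fold count, and the two toy classifiers (a majority-class baseline and a one-nearest-neighbor rule) are invented for illustration.

```python
import random

# Toy 1-D two-class dataset of (feature, label) pairs; invented for illustration.
random.seed(0)
data = [(random.gauss(mean, 1.0), label)
        for label, mean in [(0, 0.0), (1, 2.0)] for _ in range(30)]
random.shuffle(data)

def majority(train, x):
    """Baseline: always predict the most frequent training label."""
    labels = [y for _, y in train]
    return max(set(labels), key=labels.count)

def one_nn(train, x):
    """Predict the label of the closest training point."""
    return min(train, key=lambda p: abs(p[0] - x))[1]

def cv_accuracy(classifier, data, k=5):
    """k-fold cross-validated accuracy, mimicking the comparison stage."""
    fold = len(data) // k
    correct = 0
    for i in range(k):
        test = data[i * fold:(i + 1) * fold]
        train = data[:i * fold] + data[(i + 1) * fold:]
        correct += sum(classifier(train, x) == y for x, y in test)
    return correct / (fold * k)

for name, clf in [("majority", majority), ("1-NN", one_nn)]:
    print(name, round(cv_accuracy(clf, data), 2))
```

The same loop applies unchanged to each of the eight classifiers compared in the paper; only the `classifier` argument varies.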

Figure 1: Modelling process with machine learning methods

We provide here an overview of Bayesian Networks, since they were not covered in the setting of our course. Bayesian Networks are graph-based statistical models that represent probabilistic relationships among variables and are mathematically formulated as the factorization of the joint distribution into each variable's conditional distribution given its parents:

<math>P(X_1, \dots, X_n) = \prod_{i=1}^{n} P\left(X_i \mid \mathrm{Parents}(X_i)\right)</math>

They are represented as Directed Acyclic Graphs (DAGs), i.e., graphs composed of nodes representing variables and arrows representing the direction of dependency. Bayesian Networks are easily interpretable, especially for non-technical audiences, and are well adopted in the areas of biology and medicine [14-17].
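The factorization above can be made concrete with a toy network. The sketch below is a minimal illustration, not the paper's actual model: the variables (Age, ProteinLevel, PatientType) and all probabilities are invented. It encodes the two-parent structure Age → PatientType ← ProteinLevel and evaluates a joint probability by multiplying each node's conditional given its parents.

```python
# Toy Bayesian network: Age -> PatientType <- ProteinLevel.
# Each node stores P(node | parents) as a lookup table.
# All numbers are illustrative, not from the ALS study.

p_age = {"old": 0.4, "young": 0.6}       # P(Age), a root node
p_protein = {"high": 0.3, "low": 0.7}    # P(ProteinLevel), a root node

# P(PatientType = "ALS" | Age, ProteinLevel); the toy class is binary.
p_als = {
    ("old", "high"): 0.80, ("old", "low"): 0.30,
    ("young", "high"): 0.50, ("young", "low"): 0.10,
}

def joint(age, protein, patient_type):
    """P(Age, ProteinLevel, PatientType) via the chain-rule factorization."""
    p_type = p_als[(age, protein)]
    if patient_type != "ALS":          # binary toy class: ALS vs. not ALS
        p_type = 1.0 - p_type
    return p_age[age] * p_protein[protein] * p_type

print(joint("old", "high", "ALS"))     # 0.4 * 0.3 * 0.80
```

Each factor corresponds to one node of the DAG, which is what makes the model easy to read off the graph.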

Generally, the choice of an algorithm is informed by the nature of the dataset, which also suggests the most appropriate performance evaluation criteria for the technique. For instance, the dataset in this study is characterized on one hand by 4 classes (vs. the typical 2-class setting) and on the other by the imbalance in the number of participants in each group. Because of this latter characteristic, Hasan Aykut et al. included the Geometric Mean and Youden's index in their evaluation criteria, since these two metrics are known to be robust to the impact of imbalanced data on performance evaluation. Table 2 below shows the evaluation criteria formulas.
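As a concrete illustration of these imbalance-robust criteria, the sketch below computes sensitivity, specificity, the Geometric Mean, and Youden's index from a binary confusion matrix. The counts are made up for illustration and are not the study's confusion matrix.

```python
import math

# Illustrative binary confusion-matrix counts (not from the ALS study).
tp, fn = 90, 10   # positive (e.g., ALS) cases: correctly / incorrectly classified
tn, fp = 40, 60   # negative cases: correctly / incorrectly classified

sensitivity = tp / (tp + fn)                    # true positive rate
specificity = tn / (tn + fp)                    # true negative rate
accuracy = (tp + tn) / (tp + fn + tn + fp)

g_mean = math.sqrt(sensitivity * specificity)   # Geometric Mean
youden_j = sensitivity + specificity - 1        # Youden's index

print(f"accuracy={accuracy:.2f} g-mean={g_mean:.2f} J={youden_j:.2f}")
```

Note how the 0.65 accuracy looks moderate while the G-mean (0.60) and Youden's index (0.30) expose the weak specificity; averaging both error types is what makes these criteria resistant to class imbalance.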

Results Analysis

The Bayesian Network resulting from the study is presented below and visually shows the dependency of the class prediction, i.e., Patient Type, on all the features, some of which have dependencies among themselves as well; for example, we can see that some features depend on Age.
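Given such a learned structure, class prediction amounts to computing the posterior P(Patient Type | evidence) by summing the joint factorization over the unobserved variables and normalizing. The enumeration sketch below reuses an invented two-parent toy network (Age → PatientType ← ProteinLevel, with all probabilities made up), not the paper's actual model.

```python
# Posterior over the class by enumeration: marginalize unobserved parents.
# All variables and numbers are invented for illustration.

p_age = {"old": 0.4, "young": 0.6}     # P(Age), unobserved here
# P(PatientType = "ALS" | Age, ProteinLevel); binary toy class.
p_als_given = {("old", "high"): 0.80, ("old", "low"): 0.30,
               ("young", "high"): 0.50, ("young", "low"): 0.10}

def posterior_als(protein):
    """P(PatientType = ALS | ProteinLevel = protein), Age summed out.

    Since ProteinLevel is a root node independent of Age, its marginal
    P(ProteinLevel) cancels when normalizing, leaving a weighted average
    of the conditionals over Age.
    """
    return sum(p_age[a] * p_als_given[(a, protein)] for a in p_age)

print(posterior_als("high"))   # 0.4*0.80 + 0.6*0.50
```

Exact enumeration like this is exponential in the number of unobserved variables; practical Bayesian Network tools use structured inference algorithms instead, but the computation they perform is equivalent to this sum.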

[[File: BN Network.png | center]]
<div align="center">Figure 2: Bayesian Network model of the dataset</div>


References

[1] Rowland, L.P.; Shneider, N.A. Amyotrophic Lateral Sclerosis. N. Engl. J. Med. 2001, 344, 1688–1700. [CrossRef] [PubMed]

[2] Ross B. Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In Computer Vision and Pattern Recognition, 2014. CVPR 2014. IEEE Conference on, 2014.

[3] Sanjeev Arora, Aditya Bhaskara, Rong Ge, and Tengyu Ma. Provable bounds for learning some deep representations. CoRR, abs/1310.6343, 2013.

[4] Ümit V. Çatalyürek, Cevdet Aykanat, and Bora Uçar. On two-dimensional sparse matrix partitioning: Models, methods, and a recipe. SIAM J. Sci. Comput., 32(2):656–683, February 2010.
