Document Type : Original Article

Authors

1 Cardiologist, Al Qassimi Hospital, Emirates Health Services, UAE

2 Cardiologist, Heart, Vascular & Thoracic Institute at Cleveland Clinic Abu Dhabi, UAE

3 Cardiac Anesthesia, Al Qassimi Hospital, Emirates Health Services, UAE

4 Cardiologist, and CEO of Al Qassimi Hospital, Al Qassimi Hospital, Emirates Health Services, UAE

Abstract

Ischemia is a localized reduction of blood flow caused by obstruction of the vessels supplying a region; an organ such as the heart or the brain is said to be ischemic when it does not receive sufficient blood flow and oxygen. This article describes the progress made in the detection, characterization, and prediction of cardiac ischemia using Machine Learning (ML)-based Artificial Intelligence (AI) workflows applied to both Single Photon Emission Computed Tomography (SPECT) and Positron Emission Tomography (PET). In the relatively recent past, the use of machine learning algorithms in cardiology has increasingly centered on image processing for diagnosis, prognosis, and disease-type identification. The main objective of this study was to improve Nuclear Cardiology (NC) images of patients with cardiac ischemia using image processing techniques. AI applications are significantly changing clinical research. Through the examination of very large datasets, enabled by the recent convergence of powerful ML algorithms and rising computing capacity, it has been shown that classification and prediction can be improved by analyzing extremely high-dimensional, non-linear features. Machine learning is improving the identification of perfusion abnormalities in myocardial ischemia and the prediction of adverse cardiovascular events at the patient level.

Graphical Abstract

Survey on Enhancement of Nuclear Cardiology Images Based on Image Processing Techniques for Diagnosis of Ischemic Patients

Keywords

Introduction

Artificial intelligence has transformed how we engage with the vast deluge of data enabled by contemporary automated processes, connectivity, and storage systems. One reason for this meteoric rise in AI's widespread use is the advent of cutting-edge ML algorithms such as convolutional neural networks; another is the confluence in recent years of enormous computing power, vast troves of readily available training data, and a wealth of real-world applications and use cases [1]. Figure 1 displays the conceptual relationship between AI, machine learning, and deep learning.

In practice, ML has already been used in some very innovative ways in medicine, such as classifying skin lesions as benign or malignant and processing funduscopic retinal images in real time to diagnose diabetic retinopathy. Cardiac ischemia imaging has long been a mainstay of NC and the primary method for identifying obstructive coronary artery disease [2]. Every year, almost 7 million people develop cardiac ischemia. The scope of cardiac ischemia imaging continues to grow as radionuclide imaging technology develops. For instance, the switch from planar to tomographic imaging increased the predictive power of cardiac ischemia studies, and the addition of gated left ventricular ejection fraction supplied further predictive information and greater precision in patient management. ML is positioned to further improve the diagnostic efficacy of radionuclide cardiac ischemia imaging. Early machine learning implementations in cardiology focused primarily on processing ECG data to detect conduction-system abnormalities, while more recent approaches have been put into practice to diagnose arrhythmias. It is now becoming abundantly clear that cardiovascular imaging, and in particular NC (SPECT and PET cardiac perfusion imaging), constitutes a particular niche for the application of artificial intelligence [3]. Focusing on ML-based AI processes in both SPECT and PET imaging, this study offers an overview of the foundations underpinning AI and ML, the advances made in identifying and characterizing myocardial ischemia, and the data concerning the prognosis of ischemia-related outcomes.

The remainder of the manuscript is organized as follows: Section 2, artificial intelligence and machine learning; Section 3, deep learning; Section 4, recent developments in image processing; Section 5, use of AI in NC; Section 6, maximizing cardiac ischemia recognition and characterization; Section 7, refinement of prognostic estimates in cardiac ischemia; Section 8, potential limitations of ML in NC; Section 9, prospects for the future; and Section 10, conclusion.

Artificial intelligence and machine learning

The concept of AI has seeped into every facet of computer science since its inception. It originated with the idea that a system could be trained to display "human-level" intelligence by solving problems and performing tasks that previously required human intelligence. For this reason, the first AI implementations consisted of explicit rule-based programming for a limited set of tasks [4].

With such an approach, it quickly became clear that the finite set of pre-written rules available to handle incoming data would be a constraint for AI tasks. Improvement could not result from exposure to new data, since an AI system could only operate successfully when confronted with situations that were immediately identifiable within its library of pre-defined action rules. The somewhat static character of early AI implementations eventually led to "cooling" expectations for the technology [5]. The advent of contemporary ML algorithms near the close of the 20th century, which gave the field a dynamic and responsive feel, reignited interest in AI. ML refers to the group of mathematical algorithms that, given enough data, can optimize their performance on a given task. AI is a broad term that typically conveys the idea of computers performing tasks that require a certain (human) degree of cognition. The capacity of ML algorithms to continuously enhance their performance through exposure to vast volumes of (training) data is what distinguishes them from other types of algorithms. ML algorithms initially attempt to estimate a dependent variable (the outcome) using mostly ineffective, sometimes random, parameters. Table 1 summarizes the machine learning methods discussed.

These parameters are then refined to lower error rates: following each phase of data exposure, the ML model parameters are optimized or adjusted to reduce the estimated error [6]. This pattern of behavior characterizes the concept of learning central to ML. Patterns, once discovered, are then utilized to train the model to perform the desired task better. Finally, the ML model is tested on a separate dataset to assess how well it performs with new, unseen data, a process known as "generalization". There are several examples of effective ML algorithms, including random forests, boosted ensembles, and support vector machines.
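To make the training-then-generalization loop described above concrete, the following minimal sketch (our illustration, not part of any cited study) fits a boosted ensemble in Python with scikit-learn on purely synthetic data and then scores it on a held-out set; the features, labels, and package choice are all assumptions made for the example.

```python
# Minimal sketch of ML training and generalization on synthetic data.
# Assumes Python with NumPy and scikit-learn; all data here are simulated,
# not derived from the studies discussed in the text.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
# Hypothetical features (columns are illustrative only).
X = rng.normal(size=(n, 3))
# Simulated binary outcome loosely dependent on the first two features.
logit = 1.5 * X[:, 0] + 0.8 * X[:, 1] + rng.normal(scale=0.5, size=n)
y = (logit > 0).astype(int)

# Hold out data the model never sees during training ("generalization").
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Boosted ensemble: parameters are refined stage by stage to reduce training error.
model = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05)
model.fit(X_train, y_train)

auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Held-out AUC: {auc:.2f}")
```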

Convolutional neural networks, which are covered in more detail below under the term Deep Learning, are nonetheless the most noteworthy recent example of highly effective ML algorithms. As mentioned, the present wave of AI proliferation has been made feasible by the confluence of four factors. The first is increased interconnectivity, which has made it easier to store extremely large and dynamic datasets and make them available. Large-scale computing power has been the second component: modern graphics processing units (GPUs), for example, offer constantly expanding computing capacity thanks to ongoing innovation in electronics and today enable enormous computational jobs to be completed within reasonable timeframes. The ongoing development of new approaches and optimization strategies to boost the performance of these algorithms is the third component. The availability of software frameworks, which enables more users and researchers to experiment with this technology, is the fourth element. The best use cases for ML techniques are complicated tasks requiring the recognition of high-dimensional patterns for the identification of known groupings or information patterns [7].

Scope of ML

As a field, NC has enormous potential for ML. ML may be able to integrate patient information with imaging data to deliver tailored therapy suggestions for each individual, that is, to offer precision medicine. ML has to learn in a certain manner if it is to reproduce findings comparable to those of human experts. To analyze the data and reveal hidden patterns, ML makes many attempts. As the dataset grows, ML's capabilities grow proportionately; however, the interpretability of features may be lost as data collections become larger. Supervised, unsupervised, and reinforcement approaches make up machine learning, and supervised machine learning is the type most often utilized in NC [8]. A dataset comprising classes or outcomes is used in supervised learning. Unsupervised learning finds hidden correlations when it is applied to datasets that lack labels or annotations. The reward criteria used in reinforcement learning are analogous to those in human psychology. Deep learning is rapidly gaining popularity in cardiology; it employs several layers of units that resemble the neural networks seen in the human brain, and this area is flourishing thanks to major advancements in computing power and cloud infrastructure. Classic statistical approaches use labels to find associations between variables, whereas ML can also learn from data without supervision. Researchers should be aware of the extensive overlap between ML and traditional statistics. ML is dynamic and, in many respects, closer to the real world than traditional techniques, which are static; moreover, traditional statistics does not excel at prediction [9]. In contrast, ML systems have been highly successful in producing data-driven predictions from datasets. The distribution of ML methods is represented in Figure 2.
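The distinction between supervised and unsupervised learning can be illustrated with a short Python sketch; scikit-learn is assumed here for convenience, and the simulated features are hypothetical stand-ins for perfusion-derived variables rather than real NC data.

```python
# Sketch contrasting supervised and unsupervised learning on the same
# simulated feature matrix. Assumes scikit-learn; features and labels are
# hypothetical, not real nuclear cardiology data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 4)), rng.normal(2, 1, (100, 4))])
y = np.array([0] * 100 + [1] * 100)  # labels available only in the supervised case

# Supervised: the model is fitted to known class labels (e.g., ischemia present/absent).
clf = LogisticRegression().fit(X, y)
print("Supervised accuracy on training data:", clf.score(X, y))

# Unsupervised: no labels are given; the algorithm looks for hidden structure.
clusters = KMeans(n_clusters=2, n_init=10, random_state=1).fit_predict(X)
print("Cluster sizes found without labels:", np.bincount(clusters))
```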

Deep learning

Neural networks, the ML method inspired by the structure of biological synaptic connections, are composed of input, hidden, and output layers in their most basic form. Input layers can accommodate large numbers of features, such as the values of each pixel in an image. The data from the input layer are then integrated and processed by a variable number of units in the hidden layers, which pass the processed data on to the next layer until they reach an output layer.

The output layer uses a cost function that measures the model's performance and is specifically adapted to the task at hand. Examples of such tasks include classifying input samples into several categories, forecasting a continuous outcome, or recognizing a particular object in an image. Figure 3 shows a schematic representation of a deep convolutional neural network.
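The layer structure and cost function described above can be summarized in a few lines of NumPy; this is a deliberately untrained toy forward pass with hypothetical shapes, not a model from the literature.

```python
# Minimal NumPy sketch of the layer structure described above: an input
# layer, one hidden layer, an output layer, and a cost function scoring
# the prediction. Weights are random and untrained; shapes are illustrative.
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=(16,))                        # input features (e.g., 16 image-derived values)
W1 = rng.normal(size=(8, 16)); b1 = np.zeros(8)   # hidden layer parameters
W2 = rng.normal(size=(1, 8));  b2 = np.zeros(1)   # output layer parameters

h = np.maximum(0, W1 @ x + b1)                    # hidden layer with ReLU activation
p = 1 / (1 + np.exp(-(W2 @ h + b2)))              # output layer: probability of the positive class

y = 1.0                                           # hypothetical true label
cost = -(y * np.log(p) + (1 - y) * np.log(1 - p)) # binary cross-entropy cost
print("Predicted probability:", p.item(), "cost:", cost.item())
```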

Convolutions, one of the many processing operations that may be assigned to hidden layers, have played a significant role in creating networks that are exceptionally well suited for image recognition. By moving kernels over the image, convolutions enable the network to develop position-invariant features. These kernels begin with arbitrary parameters, which are then adjusted at each iteration to minimize a certain cost (error). The process by which the error is reduced by propagating it backwards from the output to the input is called backpropagation. By merging the features created at the previous layers, the network can build additional conceptual features with each successive layer to gather the information needed to finish the task [10]. Hence, an increasing number of hidden layers has become a frequent characteristic of current neural networks. The depth of the networks rises with each additional processing layer; thus, the term "Deep Learning" has quickly become the working name for these ML methods. Utilizing deep learning in NC is a promising area for future study. In contrast to classical supervised methods, deep learning gradually gathers information across several "layers" analogous to neurons. Most ML techniques hit their limits with massive datasets, whereas deep learning keeps improving as the data grow. It can accurately forecast cardiac death and extract useful information from diverse data. Automated transform by manifold approximation (AUTOMAP), a recent development, can reconstruct images from several modalities, including PET scans, without requiring a specialist [11]. This may help reduce patients' radiation exposure during SPECT scans while providing high-quality images. Both perfusion SPECT images and their representation on polar maps may benefit from deep learning.
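A minimal sketch of a convolutional network with a single backpropagation step is given below, assuming PyTorch; the random 64 x 64 tensors merely stand in for perfusion polar maps, and the architecture and hyperparameters are illustrative only.

```python
# Sketch of a small convolutional network and one backpropagation step,
# assuming PyTorch. The input is a random tensor standing in for a batch of
# single-channel perfusion polar maps; nothing here reproduces the cited models.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),   # kernels start with random weights
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 1),                            # output score (normal vs. abnormal)
)

x = torch.randn(4, 1, 64, 64)        # 4 hypothetical 64x64 polar maps
y = torch.tensor([[1.], [0.], [1.], [0.]])

criterion = nn.BCEWithLogitsLoss()   # cost function
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

loss = criterion(model(x), y)
loss.backward()                      # backpropagation: error flows from output to input
optimizer.step()                     # kernel parameters nudged to reduce the cost
print("Training loss for this batch:", float(loss))
```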

Recent developments in image processing

The usefulness and potential of neural networks may be improved by exploring other topologies, which have been investigated by several research teams. The U-Net design is often used for segmentation, since it labels each pixel of the input image with a class, and many U-Net variants support segmentation. Simply put, a U-Net consists of an encoding unit, a decoding unit, and the connections between them. The encoder applies a series of convolutions to the input image, decreasing the output's dimensionality after each operation. Using the encoder's output, the decoder upsamples with each operation and ultimately aims to reconstruct the original image. The task of the decoder is simplified by the connections between the encoder and the decoder, which feed information about the original image at different processing stages. By successfully executing such tasks, deep neural networks have shown their utility. According to experts, neural networks offer new applications in areas where Deep Learning has been dominant. Consider, for instance, the images that current Generative Adversarial Networks (GANs) create [12]. The concept of competing networks is the foundation of GANs. One network (D) uses a classifier-like approach to try to distinguish real images from generated ones, while the objective of the second network (G) is to create, from noise, images with high similarity to the source images. In medical imaging, competition between the two networks can hone their abilities until the generator (G) can produce unique images that differ from the originals while still falling within the score range of genuine images. This is valuable because it may be difficult to gather huge datasets of images with specified qualities in medical imaging, owing to issues including expense, low disease prevalence, or the absence of patient consent. Hence, this method has helped with MR imaging, lung nodules, and skin lesions [13].
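The encoder-decoder-skip pattern of the U-Net can be sketched in a few lines, again assuming PyTorch; this toy network demonstrates only the data flow described above and does not reproduce any published segmentation model.

```python
# Minimal U-Net-style encoder/decoder with one skip connection, assuming
# PyTorch. It only illustrates the encode/decode/skip pattern described above.
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)                       # encoder reduces resolution
        self.mid = nn.Sequential(nn.Conv2d(8, 16, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(16, 8, 2, stride=2)  # decoder restores resolution
        self.dec = nn.Sequential(nn.Conv2d(16, 8, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(8, 2, 1)                    # per-pixel class scores

    def forward(self, x):
        e = self.enc(x)                 # encoder features at full resolution
        m = self.mid(self.down(e))      # bottleneck features at half resolution
        u = self.up(m)
        u = torch.cat([u, e], dim=1)    # skip connection feeds encoder detail to decoder
        return self.head(self.dec(u))   # one class score per input pixel

seg = TinyUNet()(torch.randn(1, 1, 64, 64))
print(seg.shape)  # torch.Size([1, 2, 64, 64]): a two-class score map per pixel
```

In practice, such a network would be trained against expert-drawn masks with a pixel-wise cost function (for example cross-entropy or Dice loss), which the sketch omits.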

AI application in nuclear cardiology

We have shown how contemporary ML-based AI is opening up new possibilities for sophisticated estimation and data analysis. It is crucial to understand that the kind and volume of data available play a central role in choosing the ML algorithm to be used. Huge volumes of numerical data have been generated in cardiology via blood biomarker tests, genetic studies, and electronic health records, whereas both invasive and noninvasive imaging methods provide image data directly [14]. The distribution of machine learning-based, image-based diagnostic applications per disease and per modality is represented in Figures 4 and 5.

Due to its greater radioactive count rates, higher spatial resolution, and reduced radiation burden, PET imaging has a better performance profile. Although PET imaging gives improved picture quality, most recent NC information has been gathered with SPECT because of its greater accessibility [15].

While the majority of NC data has been operationalized by converting image findings into structured numerical datasets [16], SPECT and PET results are complex images for which Deep Learning is the best-suited method. The AI analysis of these numerical datasets has been accomplished at a lower computational cost using ML methods other than Deep Learning, although full Deep Learning implementations are starting to provide some intriguing outcomes. The next section summarizes the evidence supporting the use of ML-based AI to analyze numerical and image data.

Maximizing cardiac ischemia recognition and characterization

Nuclear imaging has historically been used to diagnose coronary artery disease (CAD), with the visual interpretation of physicians being prone to a broad range of variability depending on the reader's clinical expertise. The accuracy of automated CAD identification was improved in early investigations using SPECT myocardial perfusion imaging (MPI) images as input. Notably, a Support Vector Machine (SVM) surpassed all quantitative evaluations as well as the visual interpretations of two readers. In that research, various characteristics were effectively merged for the first time to increase the diagnostic precision of SPECT MPI. The incorporation of new data and more in-depth analysis was a logical extension of this earlier work [17]. Indeed, a subsequent study collected clinical data on age, sex, and the Diamond-Forrester likelihood of obstructive coronary artery disease, and merged them with quantitative perfusion variables using a machine learning algorithm (LogitBoost). The authors showed better CAD identification accuracy than either quantitative Total Perfusion Deficit (TPD) or an ML model that did not employ clinical data. This was a major finding, since it showed that the resulting ML process gave non-quantitative clinical factors substantial weight in a stepwise, orderly manner. A further innovation came a few years later, when researchers created a Deep Learning model that could directly detect anomalies from SPECT images by imitating the interpretation of specialists. More precisely, the investigators looked for potential ischemia and rest and stress abnormalities using stress and rest images and their differences. The resulting program was able to identify more than 5000 candidate locations, which nuclear cardiologists classified as pathological or normal. The neural network was trained using these final evaluations of the candidate areas together with other clinical characteristics. In a similar vein, Betancur et al. fed and trained a Deep Learning network using raw and quantitative polar maps from SPECT MPI images [18]. Figure 6 illustrates the NC data and image processing workflow and the integration of supplementary variables.
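The idea of merging clinical variables with quantitative perfusion measures in a single boosted classifier can be sketched as follows; scikit-learn's GradientBoostingClassifier is used here as a stand-in for LogitBoost, and all columns (age, sex, pre-test likelihood, stress/rest TPD) are simulated for illustration rather than taken from the cited studies.

```python
# Sketch of combining clinical variables with quantitative perfusion features
# in one boosted model. Assumes scikit-learn; GradientBoostingClassifier stands
# in for LogitBoost, and every value below is simulated.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 500
age = rng.normal(60, 10, n)
sex = rng.integers(0, 2, n)
pretest = rng.uniform(0, 1, n)            # pre-test likelihood of obstructive CAD
stress_tpd = rng.gamma(2.0, 2.0, n)       # quantitative stress perfusion deficit
rest_tpd = rng.gamma(1.5, 1.5, n)
X = np.column_stack([age, sex, pretest, stress_tpd, rest_tpd])

# Simulated ground truth: obstructive CAD more likely with a larger ischemic burden.
y = ((stress_tpd - rest_tpd) + 0.02 * (age - 60) + rng.normal(0, 1, n) > 2).astype(int)

model = GradientBoostingClassifier(n_estimators=300, learning_rate=0.03)
print("Cross-validated AUC:", cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean())
```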

This model initially extracted several features from the images and passed them to a second stage consisting of three fully connected layers. The output of the final layer was a score for the identification of obstructive disease in the territories of the Left Anterior Descending (LAD), Left Circumflex (LCx), and Right Coronary (RCA) arteries [19]. The research indicated that CAD diagnosis was more accurate than with traditional quantitative TPD, both at the individual patient level and at the vascular level [20]. It is therefore clear that ML has a high potential to assist physicians in identifying cardiac ischemia caused by CAD. Traditional statistical approaches and visual interpretation are constrained by the input of a particular source of data and by the reader's perspective, respectively. Statistical, clinical, and imaging data cannot be effectively integrated without a more targeted and creative approach. This is now possible with the help of ML algorithms, and DL in particular holds the greatest potential for direct image analysis and identification [21].
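A toy version of such a per-vessel output head, assuming PyTorch, is shown below; the layer sizes are arbitrary, the upstream feature extractor is omitted, and the three outputs simply mirror the LAD/LCx/RCA territories mentioned above rather than reproducing the published network.

```python
# Toy PyTorch sketch: image-derived features pass through three fully
# connected layers, ending in one obstruction score per coronary territory
# (LAD, LCx, RCA). Sizes are arbitrary and illustrative only.
import torch
import torch.nn as nn

class VesselHead(nn.Module):
    def __init__(self, n_features=128):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 3),            # one logit per territory: LAD, LCx, RCA
        )

    def forward(self, features):
        return torch.sigmoid(self.fc(features))  # per-vessel probability of obstruction

feats = torch.randn(2, 128)              # features extracted upstream from polar maps
print(VesselHead()(feats))               # shape (2, 3): two patients, three vessels
```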

Refinement of Prognostic Estimates in cardiac Ischemia

It is difficult to predict cardiac ischemia-related Major Adverse Cardiovascular Events (MACE), and it is becoming increasingly obvious that a wide range of data, including demographic, clinical, diagnostic, and quantitative nuclear imaging data, should be taken into account. Clinicians are under pressure to produce the best prediction possible, ideally without resorting to unnecessary invasive procedures or underestimating the risk of adverse outcomes. One possible solution is to use AI-supported systems to integrate these variables and images [22]. ML has been shown to be a far more effective technique for risk prediction based on the synthesis of multiple, often freely available, data. In the first trial, revascularization in patients with suspected CAD was predicted using clinical information paired with stress and rest TPD from SPECT MPI. Compared with traditional quantitative ischemia TPD, one of two visual experts, and an algorithm trained with stress data only, a LogitBoost ML algorithm combining rest and stress TPD data predicted revascularization more accurately. Moreover, the authors showed a strong association between ML-predicted MACE and actual MACE when the ML scores were broken down into percentiles. By narrowing their focus to the worst 5 percentiles, they also found a substantial population of patients who had previously been classified as "normal" on the basis of visual and quantitative measures. In addition, a complete reclassification of five MACE risk groups resulted in a 30% improvement in MACE prediction and a 5% reduction for patients who were MACE-free. This highlights the capability of ML to precisely identify the most crucial factors. A very intriguing tool was presented that might assist physicians in better understanding the ML reasoning, showing the role of each factor in establishing the risk score for each patient. Juarez-Orozco et al. went one step further: to forecast the onset of global and regional myocardial ischemia on PET imaging, they employed a LogitBoost model trained using clinical and demographic information. They directly contrasted their approach, which took into account the variables supported by the ESC guideline models, with a conventional logistic regression method; when both ML and conventional logistic regression were applied to the Gender and SCORE variables, ML performed better. This research highlighted the utility of ML in selecting features for the improvement of new ML models and in identifying individuals for whom an expensive test such as PET may be beneficial. Overall, these results lend credibility to the notion that ML-based AI might be used for the vitally important tasks of risk classification and MACE prediction [23]. Practitioners may be able to make better judgments when screening individuals with suspected or confirmed ischemia if the detection of ischemia is automated for risk prediction, thereby improving patient selection for both surgical and non-surgical treatment options [24].
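The percentile-based stratification of ML risk scores described above can be illustrated with the following sketch on simulated data; the package choice (scikit-learn), the feature matrix, the event rate, and the risk-band cut-points are all assumptions made for the example and do not reflect the cited studies.

```python
# Sketch of percentile-based risk stratification of ML scores. Assumes
# scikit-learn/NumPy; events and features are simulated, and the thresholds
# are illustrative.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n = 2000
X = rng.normal(size=(n, 6))                       # clinical + perfusion features (simulated)
risk = X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, n)
y = (risk > np.quantile(risk, 0.85)).astype(int)  # ~15% simulated MACE rate

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=4)
score = GradientBoostingClassifier().fit(X_tr, y_tr).predict_proba(X_te)[:, 1]

# Break predicted scores into risk bands and compare observed event rates.
bands = np.digitize(score, np.quantile(score, [0.5, 0.75, 0.95]))
for b in range(4):
    mask = bands == b
    print(f"Risk band {b}: n={mask.sum():4d}, observed event rate={y_te[mask].mean():.2%}")
```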

Potential Limitation of ML in Nuclear Cardiology

While ML has boundless promise in NC and will inevitably play a part in patient care, several challenges must be addressed before it can be successfully used in clinical practice. Extensive training with large, complex datasets is necessary for ML algorithms to operate correctly and effectively, and the acquisition of huge datasets presents a variety of challenges. First, the datasets must be de-identified and then exchanged across institutions. Second, it may take a long time to obtain the multiple institutional review board clearances needed merely to share datasets. The public availability of big datasets may make it possible to train ML algorithms; this is crucial for the Deep Learning subtype of ML in particular. For data standardization, some kind of global standard is required. While Digital Imaging and Communications in Medicine (DICOM) and the Picture Archiving and Communication System (PACS) are very helpful for imaging data, there are differences across institutions: each institution may use a different categorization system, adhere to a different set of rules, or use a separate set of acquisition processes. A common or comparable coding system for data standardization is crucial to support the future expansion of ML in NC. Integrating patient data with imaging data is beneficial for boosting the precision of ML architectures; nevertheless, many institutions do not use the same interface for clinical data from the EMR and for imaging software. It can be laborious to manually enter all the necessary medical data into imaging or machine learning systems, so simpler data mining and exchange across these two interfaces could facilitate a smooth transition and enhance machine learning training at institutions. For all aspiring ML facilities, it is fortunate that the American Society of Nuclear Cardiology is compiling patient data into the ImageGuide registry. The limited ability of machine learning to adapt to change is another important issue: it is challenging for an algorithm to accept patient or imaging information that was previously stored but has changed over time [25]. Some form of external validation is required to enable the ML algorithm to identify and integrate changes in patient or imaging information, and multi-center registries may be needed to expose the ML algorithm to a large range of features [26].
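As one small, hedged illustration of the data-sharing prerequisites mentioned above, the sketch below blanks a handful of identifying DICOM attributes using the pydicom package (an assumed tool, not one prescribed by the text); the tag list is a minimal, illustrative subset, and real de-identification must follow the full DICOM confidentiality profile and local governance requirements.

```python
# Hedged sketch of basic DICOM de-identification before cross-institution
# sharing, assuming the pydicom package. The attribute list is an
# illustrative subset only; it is not a complete anonymization profile.
import pydicom

def deidentify(in_path: str, out_path: str) -> None:
    ds = pydicom.dcmread(in_path)
    # Blank a few directly identifying attributes (illustrative subset only).
    for keyword in ("PatientName", "PatientID", "PatientBirthDate", "InstitutionName"):
        if keyword in ds:
            setattr(ds, keyword, "")
    ds.remove_private_tags()          # drop vendor-specific private elements
    ds.save_as(out_path)

# Hypothetical usage with made-up file names:
# deidentify("spect_mpi_raw.dcm", "spect_mpi_deid.dcm")
```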

Prospects for the future

In the context of NC diagnosis and prognosis optimization, Deep Learning has undoubtedly produced the most impressive outcomes [27]. Nonetheless, certain issues, among which unsupervised learning stands out, continue to present difficulties that require fresh approaches to be resolved. Several experts think that mastering unsupervised learning will result in a fundamental transformation of the ways in which AI is used [28]. The bulk of the data at our disposal lacks labels. Algorithms might be able to learn from every information resource, beyond the limitations of human perception, if they could comprehend the broad and fine-grained patterns of these data alone, without the continual supervision through which humans currently adjust their parameters [29]. Learning with only partial supervision is called semi-supervised learning: to accomplish a task such as classification, a small amount of labeled data is employed together with a larger amount of unlabeled data. Although the labeled set is used to determine the categories, the whole training dataset may be utilized to spot patterns and gain insight into the data's general structure. The limits of AI, ML, and deep learning are constantly being pushed by a vast number of researchers through small, incremental advances. Another emerging area is hybrid imaging, in which acquisitions of PET or SPECT and CT or MR may be made concurrently or sequentially, provided that ML can take into account the structural relationship between the findings of the different methods [30]. From our perspective, the future of NC can only bring advancements in every area. However, for computers to comprehend the structure of unlabeled data and develop truly reliable AI, a revolutionary new concept is still required.
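Semi-supervised learning as described above can be illustrated with scikit-learn's LabelSpreading, where unlabeled samples are marked with -1; the data below are synthetic and the parameters are arbitrary choices made for the example.

```python
# Sketch of semi-supervised learning: a small number of labeled samples plus
# many unlabeled samples (marked with -1) are used together. Assumes
# scikit-learn; all data are synthetic.
import numpy as np
from sklearn.semi_supervised import LabelSpreading

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(3, 1, (200, 2))])
y_true = np.array([0] * 200 + [1] * 200)

# Pretend only 5% of cases were ever annotated by an expert.
y = np.full(400, -1)
labeled = rng.choice(400, size=20, replace=False)
y[labeled] = y_true[labeled]

model = LabelSpreading(kernel="knn", n_neighbors=7).fit(X, y)
unlabeled = y == -1
acc = (model.transduction_[unlabeled] == y_true[unlabeled]).mean()
print(f"Accuracy on originally unlabeled samples: {acc:.2%}")
```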

Conclusion

Nuclear cardiology methods such as SPECT and PET imaging, which are used to diagnose cardiac ischemia and forecast harmful and potentially fatal cardiovascular events, are certain to undergo significant change as a result of ML-based AI. With training on massive datasets, machine learning algorithms can find and use complicated data patterns to enhance these operations. DL is of special relevance in NC, since it allows for direct analysis of cardiac images for the diagnosis and characterization of myocardial ischemia and of the risk of its related consequences. AI is not only desirable but also essential for the future of NC, as the orbits of automated quantification and ML draw ever closer together. To reach the full potential of machine learning, several challenges must be overcome, including those related to validation, data exchange, legal and financial issues, and the implementation of the required processes. The huge potential of ML may thus improve medical practice and patient care.

 

Disclosure Statement

No potential conflict of interest was reported by the authors.

Funding

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Authors' Contributions

All authors contributed to data analysis, drafting, and revising of the paper and agreed to be responsible for all the aspects of this work.

ORCID

Loai Abudaqa

https://orcid.org/0009-0005-7077-3978

Nabil Al-Swari

https://orcid.org/0009-0004-8711-7216

Shady Hegazi

https://orcid.org/0009-0005-7550-3168

Osama Alyasseen

https://orcid.org/0009-0001-2009-9605

Maro Gharbi

https://orcid.org/0000-0002-0650-4643

Farah Alaila

https://orcid.org/0009-0006-0937-449X

Hadeel Lozon

https://orcid.org/0009-0005-1053-748X

Arif Al Nooryani

https://orcid.org/0000-0002-0329-795X

HOW TO CITE THIS ARTICLE

Loai Abudaqa, Nabil Al-Swari, Shady Hegazi, Osama Alyasseen, Maro Gharbi, Farah Alaila, Hadeel Lozon, Arif Al Nooryani. Survey on Enhancement of Nuclear Cardiology Images Based on Image Processing Techniques for Diagnosis of Ischemic Patients. J. Med. Chem. Sci., 2023, 6(9) 2186-2197

DOI: https://doi.org/10.26655/JMCHEMSCI.2023.9.24   

URL: http://www.jmchemsci.com/article_170398.html

 

[1]. Juarez-Orozco L.E., Martinez-Manzanera O., Nesterov S.V., Kajander S., Knuuti J., The machine learning horizon in cardiac hybrid imaging, European Journal of Hybrid Imaging, 2018, 2:1 [Google Scholar], [Publisher]
[2]. AlMujaini H., Hilmi M., Abudaqa A., Alzahmi R., Corporate foresight organizational learning and performance: The moderating role of digital transformation and mediating role of innovativeness in SMEs, International Journal of Data and Network Science, 2021, 5:703 [Crossref], [Google Scholar], [Publisher]
[3]. Keel S., Li Z., Scheetz J., Robman L., Phung J., Makeyeva G., Aung K., Liu C., Yan X., Meng W., Guymer R., Development and validation of a deep‐learning algorithm for the detection of neovascular age‐related macular degeneration from colour fundus photographs, Clinical & Experimental Ophthalmology, 2019, 47:1009 [Crossref], [Google Scholar], [Publisher]
[4]. Thilagavathy R., Srivatsan R., Sreekarun S., Sudeshna D., Priya P.L., Venkataramani B., Real-time ECG signal feature extraction and classification using support vector machine, In 2020 international conference on contemporary computing and applications (IC3A) (pp. 44-48), IEEE, 2020 [Crossref], [Google Scholar], [Publisher]
[5]. Mincholé A., Camps J., Lyon A., Rodríguez B., Machine learning in the electrocardiogram, Journal of electrocardiology, 2019, 57:S61 [Crossref], [Google Scholar], [Publisher]
[6]. Hannun A.Y., Rajpurkar P., Haghpanahi M., Tison G.H., Bourn C., Turakhia M.P., Ng A.Y., Cardiologist-level arrhythmia detection and classification in ambulatory electrocardiograms using a deep neural network, Nature medicine, 2019, 25:65 [Crossref], [Google Scholar], [Publisher]
[7]. Haro Alonso D., Wernick M.N., Yang Y., Germano G., Berman D.S., Slomka P., Prediction of cardiac death after adenosine myocardial perfusion SPECT based on machine learning, Journal of Nuclear Cardiology, 2019, 26:1746 [Crossref], [Google Scholar], [Publisher]
[8]. Juarez-Orozco L.E., Knol R.J., Sanchez-Catasus C.A., Martinez-Manzanera O., Van der Zant F.M., Knuuti J., Machine learning in the integration of simple variables for identifying patients with myocardial ischemia, Journal of Nuclear Cardiology, 2020, 27:147 [Crossref], [Google Scholar], [Publisher]
[9]. Zhang H., Lu G., Zhan M., Zhang B., Semi-supervised classification of graph convolutional networks with Laplacian rank constraints, Neural Processing Letters, 2021, 1-12. [Crossref], [Google Scholar], [Publisher]
[10]. Chuquicusma M.J., Hussein S., Burt J., Bagci U., How to fool radiologists with generative adversarial networks? A visual turing test for lung cancer diagnosis, In 2018 IEEE 15th international symposium on biomedical imaging (ISBI 2018) (pp. 240-244), IEEE, 2018 [Crossref], [Google Scholar], [Publisher]
[11]. Han D., Lee J.H., Rizvi A., Gransar H., Baskaran L., Schulman-Marcus J., ó Hartaigh B., Lin F.Y., Min J.K., Incremental role of resting myocardial computed tomography perfusion for predicting physiologically significant coronary artery disease: a machine learning approach, Journal of Nuclear Cardiology, 2018, 25:223 [Crossref], [Google Scholar], [Publisher]
[12]. Coenen A., Kim Y.H., Kruk M., Tesche C., De Geer J., Kurata A., Lubbers M.L., Daemen J., Itu L., Rapaka S., Sharma P., Diagnostic accuracy of a machine-learning approach to coronary computed tomographic angiography–based fractional flow reserve: result from the MACHINE consortium, Circulation: Cardiovascular Imaging, 2018, 11:e007217 [Crossref], [Google Scholar], [Publisher]
[13]. Tabassian M., Sunderji I., Erdei T., Sanchez-Martinez S., Degiovanni A., Marino P., Fraser A.G., D'hooge J., Diagnosis of heart failure with preserved ejection fraction: machine learning of spatiotemporal variations in left ventricular deformation, Journal of the American society of echocardiography, 2018, 31:1272 [Crossref], [Google Scholar], [Publisher]
[14]. Saha A., Harowicz M.R., Mazurowski M.A., Breast cancer MRI radiomics: An overview of algorithmic features and impact of inter‐reader variability in annotating tumors, Medical physics, 2018, 45:3076 [Crossref], [Google Scholar], [Publisher]
[15]. Baeßler B., Weiss K., Dos Santos D.P., Robustness and reproducibility of radiomics in magnetic resonance imaging: a phantom study, Investigative radiology, 2019, 54:221 [Crossref], [Google Scholar], [Publisher]
[16]. Snaauw G., Gong D., Maicas G., Van Den Hengel A., Niessen W.J., Verjans J., Carneiro G., End-to-end diagnosis and segmentation learning from cardiac magnetic resonance imaging, In 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019) (pp. 802-805). IEEE, 2019 [Crossref], [Google Scholar], [Publisher]
[17]. Neisius U., Myerson L., Fahmy A.S., Nakamori S., El-Rewaidy H., Joshi G., Duan C., Manning W.J., Nezafat R., Cardiovascular magnetic resonance feature tracking strain analysis for discrimination between hypertensive heart disease and hypertrophic cardiomyopathy, PloS one, 2019, 14:e0221061 [Crossref], [Google Scholar], [Publisher]
[18]. Samala R.K., Chan H.P., Hadjiiski L., Paramagul C., Helvie M.A., Neal C.H., Homogenization of breast MRI across imaging centers and feature analysis using unsupervised deep embedding, In Medical Imaging 2019: Computer-Aided Diagnosis (Vol. 10950, pp. 510-516). SPIE, 2019 [Crossref], [Google Scholar], [Publisher]
[19]. Papandrianos N.I., Feleki A., Moustakidis S., Papageorgiou E.I., Apostolopoulos I.D., Apostolopoulos D.J., An explainable classification method of SPECT myocardial perfusion images in nuclear cardiology using deep learning and grad-CAM, Applied Sciences, 2022, 12:7592 [Crossref], [Google Scholar], [Publisher]
[20]. Nakajima K., Maruyama K., Nuclear Cardiology Data Analyzed Using Machine Learning, Annals of Nuclear Cardiology, 2022, 8:80 [Crossref], [Google Scholar], [Publisher]
[21]. Miller R.J., Kwiecinski J., Dey D., Slomka P.J., Artificial Intelligence/Machine Learning in Nuclear Medicine and Hybrid Imaging, In Artificial Intelligence/Machine Learning in Nuclear Medicine and Hybrid Imaging (pp. 137-156). Cham: Springer International Publishing, 2022 [Crossref], [Google Scholar], [Publisher]
[22]. Koulaouzidis G., Jadczyk T., Iakovidis D.K., Koulaouzidis A., Bisnaire M., Charisopoulou D., Artificial intelligence in cardiology—a narrative review of current status, Journal of Clinical Medicine, 2022, 11:3910 [Crossref], [Google Scholar], [Publisher]
[23]. Bhadri K., Karnik N., Dhatrak P., Current advancements in cardiovascular disease management using artificial intelligence and machine learning models: Current scenario and challenges, In 2022 10th International Conference on Emerging Trends in Engineering and Technology-Signal and Information Processing (ICETET-SIP-22) (pp. 1-6), IEEE, 2022 [Crossref], [Google Scholar], [Publisher]
[24]. Zhu F., Wang G., Zhao C., Malhotra S., Zhao M., He Z., Shi J., Jiang Z., Zhou W., Automatic reorientation by deep learning to generate short-axis SPECT myocardial perfusion images, Journal of Nuclear Cardiology, 2023, 1-11. [Crossref], [Google Scholar], [Publisher]
[25]. Nye J.A., Applying deep learning attenuation correction in the presence of motion, Journal of Nuclear Cardiology, 2022, 1-2. [Crossref], [Google Scholar], [Publisher]
[26]. Xie H., Thorn S., Chen X., Zhou B., Liu H., Liu Z., Lee S., Wang G., Liu Y.H., Sinusas A.J., Liu C., Increasing angular sampling through deep learning for stationary cardiac SPECT image reconstruction, Journal of Nuclear Cardiology, 2023, 30:86 [Crossref], [Google Scholar], [Publisher]
[27]. van der Bijl P., Stassen J., Bax J.J., Application of a deep learning algorithm to calcium scoring in myocardial perfusion imaging, Journal of Nuclear Cardiology, 2022, 30:321 [Crossref], [Google Scholar], [Publisher]
[28]. Liu J., Yang Y., Wernick M.N., Pretorius P.H., King M.A., Dose-blind denoising with deep learning in cardiac SPECT, In 2022 IEEE International Conference on Image Processing (ICIP) (pp. 1666-1670), IEEE, 2022 [Crossref], [Google Scholar], [Publisher]
[29]. Sun J., Jiang H., Du Y., Li C.Y., Wu T.H., Liu Y.H., Yang B.H., Mok G.S., Deep learning-based denoising in projection-domain and reconstruction-domain for low-dose myocardial perfusion SPECT, Journal of Nuclear Cardiology, 2022, 1-16. [Crossref], [Google Scholar], [Publisher]
[30]. Kikuchi A., Wada N., Kawakami T., Nakajima K., Yoneyama H., A myocardial extraction method using deep learning for 99mTc myocardial perfusion SPECT images: A basic study to reduce the effects of extra-myocardial activity, Computers in Biology and Medicine, 2022, 141:105164 [Crossref], [Google Scholar], [Publisher]
[31]. van Dalen J.A., Koenders S.S., Metselaar R.J., Vendel B.N., Slotman D.J., Mouden M., Slump C.H., van Dijk J.D., Machine learning based model to diagnose obstructive coronary artery disease using calcium scoring, PET imaging, and clinical data, Journal of Nuclear Cardiology, 2023, 1-10. [Crossref], [Google Scholar], [Publisher]