Artificial intelligence in cardiology: applications, benefits and challenges

Br J Cardiol 2018;25:86–7 doi:10.5837/bjc.2018.024

The introduction of digital technologies such as robotic implants, home monitoring devices, wearable sensors and mobile apps into healthcare has produced significant amounts of data, which need to be interpreted and operationalised by physicians and healthcare systems across disparate fields.1 Most often, such technologies are implemented at the patient level, with patients becoming producers and consumers of their own personal data, which in turn leads them to demand more personalised care.2

This digital transformation has led to a move away from a ‘top-down’ data management strategy, “which entailed either manual entry of data with its inherent limitations of accuracy and completeness, followed by data analysis with relatively basic statistical tools… and often without definitive answers to the clinical questions posited”.3 We are now in an era of a ‘bottom-up’ data management strategy that involves real-time data extraction from various sources (including apps, wearables, hospital systems, etc.), transformation of that data into a uniform format, and loading of the data into an analytical system for final analysis.3
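To make the ‘bottom-up’ strategy more concrete, the minimal sketch below illustrates a generic extract–transform–load pipeline in Python with pandas. The sources, column names and SQLite store are hypothetical stand-ins (small in-memory tables rather than real device or hospital feeds), not a description of any particular system.

```python
# A minimal extract-transform-load (ETL) sketch of the 'bottom-up' strategy described
# above. For the sake of a self-contained example, the 'sources' are small in-memory
# tables; in practice they would be device exports, app feeds or hospital systems.
import sqlite3
import pandas as pd

def extract() -> list[pd.DataFrame]:
    """Pull raw records from disparate (here, simulated) sources."""
    wearable = pd.DataFrame({
        "Patient_ID": [1, 1], "Timestamp": ["2018-06-01 08:00", "2018-06-01 09:00"],
        "Measure": ["heart_rate", "heart_rate"], "Value": [62, 71],
    })
    hospital = pd.DataFrame({
        "patient_id": [1], "timestamp": ["2018-06-01 10:30"],
        "measure": ["systolic_bp"], "value": [128],
    })
    return [wearable, hospital]

def transform(frames: list[pd.DataFrame]) -> pd.DataFrame:
    """Map every source onto one uniform schema: patient_id, timestamp, measure, value."""
    uniform = []
    for frame in frames:
        frame = frame.rename(columns=str.lower)
        frame["timestamp"] = pd.to_datetime(frame["timestamp"])
        uniform.append(frame[["patient_id", "timestamp", "measure", "value"]])
    return pd.concat(uniform, ignore_index=True)

def load(data: pd.DataFrame) -> None:
    """Load the harmonised records into an analytical store (a local SQLite database)."""
    conn = sqlite3.connect("analytics.db")
    data.to_sql("observations", conn, if_exists="append", index=False)
    conn.close()

if __name__ == "__main__":
    load(transform(extract()))
```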

The challenges

All these data, however, pose a serious challenge for physicians: the challenge of limitless choice. According to a white paper by Stanford Medicine,4 “the sheer volume of health care data is growing at an astronomical rate: 153 exabytes (one exabyte = one billion gigabytes) were produced in 2013 and an estimated 2,314 exabytes will be produced in 2020, translating to an overall rate of increase at least 48 percent annually.” With so much data on the daily decisions of millions of patients about their physical activity, dietary intake, medication adherence and self-monitoring (e.g. blood pressure, weight), to name but a few, physicians are at a loss as to which data to focus on, what to search for, and towards which desired outcome.

Increased data storage, high computing power and exponential learning capabilities together enable computers to learn much faster than humans and to address this challenge of limitless choice. Artificial intelligence (AI) is the development of intelligent systems capable of taking “the best possible action in a given situation”.5 Developing such intelligent systems requires machine-learning algorithms that can keep learning as conditions change. Machine learning takes many different forms and is associated with many different schools of thought, including philosophy, psychology and logic (with learning algorithms based on inverse deduction), neuroscience and physics (with learning algorithms based on backpropagation), genetics and evolutionary biology (with learning algorithms based on genetic programming), statistics (with learning algorithms based on Bayesian inference) and mathematical optimisation (with learning algorithms based on support vector machines).6 Each of these schools of thought applies its learning algorithms to different problems. However, none of these algorithms is perfect at solving all possible problems, and none has reached a level of ‘superintelligence’7 that would be able to predict, diagnose and recommend treatment for complex medical conditions. Still, when competently combined, and provided they are fed the appropriate data to learn from, these algorithms can generate what has been called a ‘master algorithm’, which could potentially solve much more complex problems than humans can.6
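As an illustration of how learners from different schools can be combined, the sketch below pools a backpropagation-trained neural network, a Bayesian classifier and a support vector machine into a simple voting ensemble using scikit-learn and synthetic data. It is an illustrative toy, not the ‘master algorithm’ itself; the data and parameter choices are invented for the example.

```python
# Illustrative only: combining learners from different 'schools' (a backpropagation-
# trained neural network, a Bayesian classifier and a support vector machine) into a
# single soft-voting ensemble. The data are synthetic; this is not a clinical model.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic binary classification problem standing in for any labelled clinical dataset.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("neural_net", make_pipeline(StandardScaler(), MLPClassifier(max_iter=1000, random_state=0))),
        ("bayesian", GaussianNB()),
        ("svm", make_pipeline(StandardScaler(), SVC(probability=True, random_state=0))),
    ],
    voting="soft",  # average predicted probabilities across the three learners
)

print("Cross-validated accuracy:", cross_val_score(ensemble, X, y, cv=5).mean())
```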

Positive impacts

Machine learning can positively impact cardiovascular disease prediction and diagnosis through algorithms that learn representations of data much faster and more efficiently than physicians can. For example, a physician who currently wishes to predict the readmission of a patient with congestive heart failure needs to screen a large but unstructured electronic health record (EHR) dataset, which includes variables such as International Classification of Diseases (ICD) billing codes, medication prescriptions, laboratory values, physiological measurements, imaging studies and encounter notes. Such a dataset makes it extremely difficult to decide a priori which variables should be included in a predictive model and which methods should be applied in the model itself.8

Such predictive models can be produced with ‘supervised learning’ algorithms, which require a dataset with predictor variables and labelled outcomes.8 For example, a recent study investigated the predictive value of a machine-learning algorithm that “incorporates speckle-tracking echocardiographic data for automated discrimination of hypertrophic cardiomyopathy (HCM) from physiological hypertrophy seen in athletes”.9 The results showed that machine-learning algorithms can assist in “the discrimination of physiological versus pathological patterns of hypertrophic remodelling… for automated interpretation of echocardiographic images, which may help novice readers with limited experience”.9
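The following minimal sketch illustrates the supervised-learning workflow in general terms: a classifier is trained on labelled examples and evaluated on held-out data. The features are synthetic stand-ins (not the speckle-tracking measures of the cited study), so the code shows only the shape of the approach.

```python
# A minimal supervised-learning sketch: a classifier trained on labelled examples
# (synthetic features standing in for, e.g., echocardiography-derived measures)
# to discriminate two outcome classes. Data and features are invented for
# illustration; this is not the model used in the cited study.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
X = rng.normal(size=(n, 5))  # five hypothetical predictor variables
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)  # labelled outcome

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

probs = model.predict_proba(X_test)[:, 1]
print("Hold-out AUC:", round(roc_auc_score(y_test, probs), 3))
```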

A second set of algorithms used in cardiology, known as ‘unsupervised learning’ algorithms, focuses on discovering hidden structure in a dataset by exploring relationships between different variables.8 For example, one study investigated the use of such algorithms to identify temporal relations among events in EHRs; these temporal relations were then examined to assess whether they improved model performance in predicting the initial diagnosis of heart failure.10 In this way, results from unsupervised learning algorithms can feed into supervised learning algorithms for predictive modelling.
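A minimal sketch of the unsupervised idea, assuming synthetic, unlabelled data: k-means clustering discovers groups in the data, and the resulting cluster labels can then be appended as an extra predictor for a downstream supervised model, mirroring the workflow described above.

```python
# A minimal unsupervised-learning sketch: k-means clustering discovers groups in
# unlabelled data, and the cluster assignments can then be added as a feature for
# a downstream supervised model. Data and cluster count are purely illustrative.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Two synthetic groups of 'patients' with four unlabelled measurements each.
X = np.vstack([rng.normal(loc=0, size=(150, 4)), rng.normal(loc=3, size=(150, 4))])

X_scaled = StandardScaler().fit_transform(X)
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_scaled)

# Cluster membership becomes an extra predictor column for a supervised model.
X_augmented = np.column_stack([X, clusters])
print("Cluster sizes:", np.bincount(clusters), "| augmented shape:", X_augmented.shape)
```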

A third set of algorithms, reinforcement learning algorithms, “learn behavior through trial and error given only input data and an outcome to optimize”.8 Designing dynamic treatment regimens, such as managing rates of re-intubation and regulating physiological stability in intensive care units, is one area where the application of reinforcement learning may hold great potential.11
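The toy sketch below illustrates the trial-and-error logic of reinforcement learning with tabular Q-learning. The ‘states’, ‘actions’ and rewards are entirely invented abstractions, not a clinical decision tool; the point is simply how an agent converges on the action with the highest expected reward in each state.

```python
# A toy tabular Q-learning sketch: an agent learns, by trial and error, which of two
# hypothetical actions to take in each of three abstract states to maximise a reward
# signal. The environment, states, actions and rewards are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 3, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.1  # learning rate, discount factor, exploration rate

# Hypothetical expected reward of each action in each state (unknown to the agent).
true_reward = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.2]])

state = 0
for _ in range(5000):
    # Epsilon-greedy action selection: mostly exploit, occasionally explore.
    action = rng.integers(n_actions) if rng.random() < epsilon else int(Q[state].argmax())
    reward = true_reward[state, action] + rng.normal(scale=0.1)
    next_state = rng.integers(n_states)  # toy dynamics: next state chosen at random
    # Standard Q-learning update rule.
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print("Learned greedy action per state:", Q.argmax(axis=1))
```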

Potential negative impacts

Evidently, the potential benefits of AI in cardiology are enormous. However, they do not come without challenges. First, there are clear benefits for work productivity. There are currently too few physicians to care for an ever-increasing ageing population.12 AI can support, rather than replace, physicians, generating time- and cost-saving benefits for them and their patients and enabling more compassionate and thorough interactions. However, as more tasks become automated, fewer physicians may be required, or fewer may work on a full-time basis, since many tasks could be delivered through platforms by part-time, freelance physicians. This may change the relationship between patients, physicians and administrative staff in healthcare systems.13

Second, as discussed earlier, machine-learning algorithms can scan through large volumes of health data, enabling faster identification of predictive, diagnostic and treatment options for different cardiovascular diseases. This feeds into the current demand for more personalised care. At the same time, however, many patients now expect more transparency about what types of data are shared, who uses them and for what purpose. With the General Data Protection Regulation (GDPR) now in full force across Europe, there are important implications for the security and privacy of the data that machine-learning algorithms need in order to keep evolving. The recent scandal involving Google DeepMind and the Royal Free London NHS Foundation Trust, in which identifiable patient records from across the entire Trust were transferred without explicit consent,14 is a case to be avoided. The architecture of the digital infrastructure supporting AI and machine learning across different localities and between applications and platforms needs to be carefully designed,15 in order to maintain the security and privacy of healthcare data.

Beyond the issue of seeking consent before any access to and use of data, there are also issues around the transparency of algorithmic objectives and outcomes (how algorithms work and to what end) and around accountability for the potential misuse of data. As a recent report has pointed out, informed consent from all patients may not always be feasible because of the way data are shared across platforms and for different purposes; algorithmic transparency, although sought after, may be difficult to achieve because algorithms learn and evolve dynamically; and accountability for data use may raise challenging ethical questions if, in the end, such use leads to improved patient outcomes.5 What matters most is the clinical efficacy of algorithms and their use of data.5

Finally, both AI and physicians can make errors in their clinical judgment, whether through not having seen a particular case before or through poor training; combining the two, AI and human expertise, can reduce the number of clinical errors. In this context, there are opportunities to revisit the training of individual physicians, as well as of multi-disciplinary teams, so that they learn to interact with AI. We believe this is of paramount importance, and new policies should be developed towards improved and enhanced training of physicians, which will also enable more effective and efficient clinical judgment.

Conclusion

In conclusion, it is important that we avoid placing ‘exaggerated hope’ in the potential impact of AI, but also that we do not fall victim to ‘exaggerated fear’ because we cannot identify with the technology.16 “The real dangers of AI are no different from those of other artifacts in our culture: from factories to advertising, weapons to political systems. The danger of these systems is the potential for misuse, either through carelessness or malevolence, by the people who control them.”16 The possibilities of improving clinical efficacy and healthcare outcomes through AI are enormous, but we need to be aware of the associated risks and challenges and to try to minimise them through multi-disciplinary research and renewed legal and ethical policies.

Conflict of interest

None declared.

References

1. Free C, Phillips G, Watson L et al. The effectiveness of mobile-health technologies to improve health care service delivery processes: a systematic review and meta-analysis. PLoS Med 2013;10:e1001363. https://doi.org/10.1371/journal.pmed.1001363

2. Kirchhof P, Sipido KR, Cowie MR, Eschenhagen T, Fox KA, Katus H. The continuum of personalized cardiovascular medicine: a position paper of the European Society of Cardiology. Eur Heart J 2014;35:3250–7. https://doi.org/10.1093/eurheartj/ehu312

3. Chang AC. Big data in medicine: the upcoming artificial intelligence. Prog Pediatr Cardiol 2016;43:91–4. https://doi.org/10.1016/j.ppedcard.2016.08.021

4. Stanford Medicine. Health trends report: harnessing the power of data in health. Stanford CA: Stanford Medicine, 2017. Available from: https://med.stanford.edu/content/dam/sm/sm-news/documents/StanfordMedicineHealthTrendsWhitePaper2017.pdf

5. Future Advocacy. Ethical, social, and political challenges of artificial intelligence in health. London: Future Advocacy, 2018. Available from: http://futureadvocacy.com/ai-think-tank/#ai-publications

6. Domingos P. The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World. New York: Basic Books, 2015.

7. Bostrom N. Superintelligence: Paths, Dangers and Strategies. Oxford: Oxford University Press, 2016.

8. Johnson KW, Soto JT, Glicksberg BS et al. Artificial intelligence in cardiology. J Am Coll Cardiol 2018;71:2668–79. https://doi.org/10.1016/j.jacc.2018.03.521

9. Narula S, Shameer K, Omar AMS, Dudley JT, Sengupta PP. Machine-learning algorithms to automate morphological and functional assessments in 2D echocardiography. J Am Coll Cardiol 2016;68:2287–95. https://doi.org/10.1016/j.jacc.2016.08.062

10. Choi E, Schuetz A, Stewart WF, Sun J. Using recurrent neural network models for early detection of heart failure onset. J Am Med Inform Assoc 2016;24:361–70. https://doi.org/10.1093/jamia/ocw112

11. Weng SF, Reps J, Kai J, Garibaldi JM, Qureshi N. Can machine-learning improve cardiovascular risk prediction using routine clinical data? PLoS One 2017;12:e0174944. https://doi.org/10.1371/journal.pone.0174944

12. World Health Organization. Global strategy on human resources for health: workforce 2030. Geneva: WHO, 2016. Available from: http://www.who.int/hrh/resources/pub_globstrathrh-2030/en/

13. Taylor M, Marsh G, Nicol D, Broadbent P. Good work: the Taylor review of modern working practices. London: Royal Society of Arts, 2017. Available from: https://www.gov.uk/government/publications/good-work-the-taylor-review-of-modern-working-practices

14. Powles J, Hodson H. Google DeepMind and healthcare in an age of algorithms. Health Technol 2017;7:351–67. https://doi.org/10.1007/s12553-017-0179-1

15. Constantinides P, Henfridsson O, Parker GG. Platforms and infrastructures in the digital age. Inf Syst Res 2018;online first. https://doi.org/10.1287/isre.2018.0794

16. Bryson JJ, Kime PP. Just an artifact: why machines are perceived as moral agents. Proceedings of the 22nd International Joint Conference on Artificial Intelligence (IJCAI) 2011:1641–6. Available from: http://www.cs.bath.ac.uk/~jjb/ftp/BrysonKime-IJCAI11.pdf
