Artificial intelligence is here to stay, and it has the potential to transform healthcare alongside other sectors of the economy.
In this article, we explore AI and its use cases, benefits, and applications in healthcare. We also look at how countries and organizations can implement AI healthcare solutions to serve patients better.
In general, AI refers to the simulation of human intelligence by machines. In its National AI Strategy [1], the UK government gives a working general definition of AI as, “machines that perform tasks normally performed by human intelligence, especially when the machines learn from data how to do those tasks”.
AI comprises a constantly evolving collection of capabilities as new technologies emerge. In fact, it is a common saying among AI researchers and practitioners that once machines can reliably perform a task that previously required humans, it is no longer regarded as a sign of 'intelligence' or AI.
AI uses in healthcare can be categorized by solution type or use case. Some of the key AI use cases currently in healthcare are explained below:
Healthcare Organization and Administration
Repetitive administrative tasks can be automated and optimized by AI capabilities such as natural language processing: taking notes and transcribing during consultations, filing an individual's electronic patient health record, writing, printing, and posting patient letters, and analyzing patient feedback to support quality improvement.
For example, in Hong Kong, the HK Health Authority is using an AI-based tool to produce monthly or weekly nursing staff rosters that satisfy a set of constraints, such as staff availability, preferences, working hours, ward operational requirements, and hospital regulations. It has been deployed across 40 public hospitals and is responsible for 4,000 staff schedules. At Imperial College Healthcare NHS Trust, a pilot tested the use of Natural Language Processing to analyze patient feedback in real-time, which led to responses to feedback being implemented more quickly than without the tool.
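The rostering problem described above can be framed as constraint satisfaction: a roster is acceptable only if it respects availability, working-hour limits, and ward rules. The sketch below is purely illustrative (it is not the Hong Kong Hospital Authority's actual system, and the constraint names and limits are assumptions); a production tool would use a constraint solver or optimizer, while this toy version only validates a candidate roster against two simple constraints.

```python
MAX_SHIFTS_PER_WEEK = 5  # hypothetical ward rule for illustration

def violations(roster, availability, max_shifts=MAX_SHIFTS_PER_WEEK):
    """Return a list of constraint violations for a candidate roster.

    roster:       {nurse: [day, ...]} assigned shifts
    availability: {nurse: set of days the nurse can work}
    """
    problems = []
    for nurse, days in roster.items():
        # Constraint 1: staff can only be rostered on days they are available.
        for day in days:
            if day not in availability.get(nurse, set()):
                problems.append(f"{nurse} is unavailable on {day}")
        # Constraint 2: respect the weekly working-hours limit.
        if len(days) > max_shifts:
            problems.append(f"{nurse} exceeds {max_shifts} shifts")
    return problems

roster = {"Ana": ["Mon", "Tue"], "Ben": ["Mon", "Wed", "Sat"]}
availability = {"Ana": {"Mon", "Tue", "Wed"}, "Ben": {"Mon", "Wed"}}
print(violations(roster, availability))  # -> ['Ben is unavailable on Sat']
```

A real system would search over many candidate rosters (for example with a constraint-programming solver) to find one with zero violations that also balances staff preferences; the validation step above is the building block such a search repeats.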
Clinical Decision Making
AI can be used to optimize and personalize decisions about triage, diagnostics, prognosis, and care pathways at the point of care. This entails recognizing patterns in CT, MRI, and ultrasound scans, analyzing clinical data, genomic data, health records, personal and family histories, speech patterns, clinical guidelines, best practices, and medical research.
This is already being implemented at Moorfields Eye Hospital, which has trialed the use of optical coherence tomography (a non-invasive diagnostic technique that renders a cross-sectional view of the retina) to pick up retinal diseases through AI tagging of ‘urgent’ cases in need of referral. Similarly, IBM’s Watson can parse millions of pages of medical literature in seconds and generate diagnostic insights based on a patient’s symptoms.
In another use case, Ethos is an AI-based tool for targeting radiotherapy in cancer treatment, currently in use at the Beatson West of Scotland Cancer Centre. Traditionally, clinicians must draw up and continuously adjust treatment plans, as tumors and surrounding tissue typically change while the disease and treatment progress; this tool helps clinicians make these decisions more quickly and effectively.
Service Delivery
AI-powered applications, bots, personal, wearable, and smart devices can be used to connect directly with patients, delivering therapies, providing health information, and/or assisting patients in sticking to prescribed interventions and managing health issues.
Computerized Cognitive Behavioral therapy (CBT), for example, has a relatively long history in the NHS [1], but a new generation of digital therapies aims to deliver CBT at scale with better engagement. Sleepio is one example: a six-week tailored program delivered online that is designed to treat insomnia. The therapy is personalized using AI that tailors the intervention to patient data.
Population Health
Population-level AI analysis of massive volumes of new forms of data can generate unique, real-time insights into epidemics, disease spread, and the drivers of ill health, and can identify individuals or groups at risk of developing specific diseases who could benefit from proactive, early intervention.
For example, during the pandemic, NHSX developed the Covid-19 Data Store, which gathered data from various sources across the health and social care system as part of an effort to use AI to create a predictive model to guide the government's response to Covid-19.
Bio-medical Research
AI analysis is being applied to new types of data, such as genomic data and patient information, to help discover new drugs and treatments.
Recently, MIT researchers discovered a new class of compounds that can kill drug-resistant bacteria. They used AI to screen millions of compounds, selecting those with high predicted ability to kill bacteria and low predicted toxicity to living tissue. This yielded several hundred new compounds worth testing, several of which were empirically shown to be capable of killing drug-resistant bacteria.
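The core of such a screen is a simple filter applied at scale: keep only compounds whose predicted antibacterial activity is high and whose predicted toxicity is low. The sketch below is a toy illustration of that filtering step (it is not the MIT pipeline; the thresholds and score values are invented, and in practice the scores would come from trained machine-learning models rather than being supplied by hand).

```python
ACTIVITY_THRESHOLD = 0.8   # assumed cutoff: minimum predicted antibacterial activity
TOXICITY_THRESHOLD = 0.2   # assumed cutoff: maximum predicted toxicity

def select_candidates(compounds):
    """Filter a virtual library of compounds.

    compounds: list of (name, predicted_activity, predicted_toxicity),
               where both scores are in [0, 1].
    Returns the names of compounds that pass both cutoffs.
    """
    return [name
            for name, activity, toxicity in compounds
            if activity >= ACTIVITY_THRESHOLD and toxicity <= TOXICITY_THRESHOLD]

library = [
    ("cmpd-001", 0.91, 0.05),  # active and non-toxic -> kept
    ("cmpd-002", 0.95, 0.60),  # active but toxic     -> dropped
    ("cmpd-003", 0.40, 0.01),  # non-toxic, inactive  -> dropped
]
print(select_candidates(library))  # -> ['cmpd-001']
```

Applied to millions of compounds, a filter like this narrows the search space to the few hundred candidates that merit expensive laboratory testing, which is the work pattern the article describes.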
For AI to be adopted and integrated into healthcare smoothly, the following guiding principles must be adhered to:
AI Must Be Robustly Assessed for Safety and Efficacy in Clinical Settings
The present evidence base for AI healthcare tools is generally insufficient. Robust research has shown that AI models can outperform human clinicians in some isolated healthcare tasks, but several systematic reviews have highlighted the limitations of the current evidence base for overall AI performance.
One such review describes a ‘paucity of robust evidence’ for claims for the benefits of AI in advancing clinical outcomes, ‘where [there are] only a handful of RCTs comparing AI-assisted tools with standard-of-care management in various medical conditions’.
A large part of the problem is that much of the literature focuses on technical evaluations in a lab setting rather than on clinical efficacy in the ‘real world’, including how these technologies affect patient care in actual practice. As the Oxford Internet Institute notes, ‘it is important to remember that building an accurate or high-performing AI model and writing about it in an academic publication is not the same as building an AI model that is ready for deployment in a clinical system’. Moving from ‘the lab’ to ‘the clinic’ is a key part of the transition, and yet very few AI models have successfully made the leap across this ‘chasm’.
Governance and Regulation to Protect Patient Safety is Vital
To ensure safety and to guarantee that AI is trusted (and, ultimately, used) by both staff and patients, governance, including legislation, is essential. The Medicines and Healthcare products Regulatory Agency (MHRA) oversees AI regulation, which is a reserved matter, meaning it is decided at the UK level. Following the UK's exit from the EU, medical device regulation in the country, including AI used as a medical device, is currently in a transitional phase [1].
UNESCO has guidance and principles for the use of AI in general and for the use of big data in health. UNESCO’s work on the ethical implications of AI is supported by two standing expert committees, the World Commission on the Ethics of Scientific Knowledge and Technology and the International Bioethics Committee [2].
Also, bioethics laws and policies play a role in regulating the use of AI, and several bioethics laws have been revised in recent years to include recognition of the growing use of AI in science, health care and medicine.
The French Government’s most recent revision of its national bioethics law, which was endorsed in 2019, establishes standards to address the rapid growth of digital technologies in the health-care system. It includes standards for human supervision, or human warranty, that require evaluation by patients and clinicians at critical points in the development and deployment of AI. It also supports free, informed consent for the use of data and the creation of a secure national platform for the collection and processing of health data [2].
Staff and Patient Involvement Throughout the Development and Implementation Process is Necessary
As end users of these technologies, staff, patients, and the general public must be engaged and supportive for AI adoption to succeed. Without this, there is a risk that the wrong technologies will be adopted, that adopted technologies will not be embraced and used, or that technologies will be poorly implemented.
When achieving buy-in, it is important to take into account vulnerable groups in society. If marginalized communities are not properly taken into account, there is a risk that mistrust will persist among them in healthcare systems, which will exacerbate health disparities. Like any health messaging, public communication around AI needs to be properly audience-specific.
Staff Must Be Trained on New Technologies (Initially and Continuously) and the Technologies Must Be Integrated into Workflows
For the health system to implement AI technologies safely and effectively, staff members must be familiar with both the fundamentals of the technology and how to use it securely. Without this knowledge, and the time to critically examine outputs, staff capacity to supervise these technologies will be compromised, and the likelihood of automation bias will increase. According to the 2019 Topol Review [1], healthcare workers' digital literacy will need to rise, and effective implementation depends on training and retraining.
Existing IT Infrastructure and Data Must be Improved
The successful implementation of AI depends on mature existing technology and substantial, high-quality training datasets. However, the quality of NHS IT infrastructure is notoriously inadequate, as illustrated by a BMA article published in 2022 [1]. Capturing and sharing high-quality data can be nearly impossible when patient data is drawn from tens or hundreds of diverse sources in a single trust and stored on systems with little to no interoperability.
Legal Liability Must Be Clarified
The growing push to apply AI in healthcare raises a variety of difficult legal issues. One area of special concern is the assignment of legal responsibility when AI is used to provide care and treatment and patient safety is compromised. When determining liability, a 'legal personality' must bear legal responsibility. In practice, this means an AI system's developers, suppliers, and users will bear accountability for its actions and omissions.
Existing laws state that a doctor treating a patient has a legal obligation to give reasonable care. If care falls below that reasonable level and causes harm, the patient may file a claim for damages. If a doctor misuses equipment (for example, negligently misreading a scan), the doctor is held accountable if the inaccuracy results in injury. Similarly, an employer may be held vicariously accountable for the actions of their employees. If AI is to be fully adopted in healthcare, the issue of who bears legal responsibility must be clearly defined.
If all of the above guiding principles are carefully followed, AI applications in healthcare can become commonplace and substantially improve healthcare delivery for patients.
Sources and References
[1] BMA, Principles for Artificial Intelligence and its Application in Healthcare
[2] WHO, Ethics and Governance of Artificial Intelligence for Health