There is currently an ENORMOUS amount of hype surrounding the use of AI for healthcare - both in the UK and abroad. Whilst it is reasonable to be excited about the potential benefits of AI, it is also important to note that getting the implementation 'right' is more challenging than it might seem for multiple reasons. A brief outline of the implementation challenges is provided below, or you can read my full guide to thinking critically about AI for healthcare here.
First: Data. NHS data is often presented as a rich resource, ready to be mined for valuable insights and used to train models. But this skims over the fact that getting any health data (including NHS data) ‘research ready’ requires a MASSIVE AMOUNT OF WORK. EHR data is not always well or consistently structured, does not contain regularly occurring events, and is often plagued with missing values. It’s held in non-integrated siloes, in different formats, with different and inconsistent access rules. Providing access to the volumes of well-structured, well-curated data required to train high-performing and accurate models is very challenging, especially given the imperative of maintaining patient privacy. Privacy-preserving ‘solutions’ such as Trusted Research Environments (TREs) may not work well for AI/ML, and exporting trained models from them is also likely to raise privacy concerns.
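To make the ‘research ready’ point concrete, here is a minimal, entirely synthetic sketch (the systems, column names, and values are invented for illustration) of the harmonisation that even a single laboratory value can need before it is usable for analysis or model training:

```python
# Toy illustration (synthetic data only): the same measurement arrives from
# different systems with different column names, units, date formats and gaps.
import pandas as pd
import numpy as np

# Hypothetical extract from system A: HbA1c in mmol/mol, ISO dates
extract_a = pd.DataFrame({
    "nhs_number": ["111", "222", "333"],
    "hba1c_mmol_mol": [48.0, np.nan, 62.0],   # missing value, as is typical
    "test_date": ["2023-01-15", "2023-02-03", "2023-02-20"],
})

# Hypothetical extract from system B: same test, but in %, UK-style dates,
# and a different identifier column name
extract_b = pd.DataFrame({
    "patient_id": ["444", "555"],
    "hba1c_pct": [7.2, np.nan],
    "date_of_test": ["15/03/2023", "01/04/2023"],
})

# Harmonise: rename columns, convert % (DCCT) to mmol/mol (IFCC), parse dates
a = extract_a.rename(columns={"nhs_number": "patient_id",
                              "hba1c_mmol_mol": "hba1c",
                              "test_date": "date"})
a["date"] = pd.to_datetime(a["date"], format="%Y-%m-%d")

b = extract_b.rename(columns={"hba1c_pct": "hba1c", "date_of_test": "date"})
b["hba1c"] = (b["hba1c"] - 2.15) * 10.929   # standard DCCT -> IFCC conversion
b["date"] = pd.to_datetime(b["date"], format="%d/%m/%Y")

combined = pd.concat([a, b], ignore_index=True)
print(combined)
print(f"Missing HbA1c values: {combined['hba1c'].isna().sum()} of {len(combined)}")
```

Multiply this by hundreds of variables, dozens of source systems, and free-text fields, and the scale of the curation effort becomes clear.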
NEXT, there's the significant challenge of integrating models into legacy clinical systems. These systems are rarely robust, reliable, or flexible enough to support complex models in routine operation.
THEN there’s the need to make models fit within existing clinical workflows without causing additional burden or encouraging dangerous workarounds. These issues are already well known with basic ‘pop-ups’, e.g. alert fatigue, and they multiply with AI. Being able to ‘diagnose’ or screen thousands of people at once doesn't increase the system’s ability to treat them, so it’s not clear whether this is beneficial from either a cost-efficiency or a life-saving perspective (a rough illustration follows below). Plus there’s the fact that many AI systems are billed as clinical decision support systems (CDSS) but might not accurately replicate the steps involved in clinical decision making and may ADD to cognitive overload.
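To illustrate the screening-at-scale point, here is a rough, hypothetical calculation (the population, prevalence, and accuracy figures are invented for illustration, not taken from any real tool) showing how even a seemingly accurate model generates a large downstream workload when applied to a whole population:

```python
# Hypothetical numbers, for illustration only: a screening model applied to a
# whole population creates a large follow-up workload even when 'accurate'.
population = 500_000      # people screened
prevalence = 0.01         # 1% actually have the condition
sensitivity = 0.90        # model detects 90% of true cases
specificity = 0.95        # model correctly clears 95% of non-cases

true_cases = population * prevalence
non_cases = population - true_cases

true_positives = true_cases * sensitivity
false_positives = non_cases * (1 - specificity)
flagged = true_positives + false_positives
ppv = true_positives / flagged

print(f"People flagged for follow-up: {flagged:,.0f}")
print(f"Of whom actually have the condition: {true_positives:,.0f} (PPV ~ {ppv:.0%})")
# -> ~29,250 people flagged, of whom only ~4,500 are true cases (PPV ~ 15%);
#    every flag still needs clinician time, confirmatory tests and capacity to treat.
```

Every one of those flags lands on an already stretched workforce, which is exactly the workflow burden described above.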
Finally, there’s a skills mix issue. Software devs need an understanding of the healthcare system, biology, and medicine. Clinicians need an understanding of how models are developed and how they generate results. Without this knowledge, clinicians may be vulnerable to automation bias and to misinterpreting results in ways that could harm patients, while software devs may build models fraught with epistemic errors or unhelpful constraints. What’s more, if models are to reflect the true complexity of medicine, they need to be developed by a diverse workforce capable of understanding the many different ‘lived experiences’ of health and healthcare, and how these differ depending on demographic and socioeconomic factors and their interactions.
None of this means AI/ML will have no role to play in the future of healthcare. Some role is somewhat inevitable at this point, and it does have potential for positive impact. BUT we must pay equal attention to the foundations, not just to the foundation models.