Jessica Morley

Foundation models are exciting, but they must not disrupt the foundations of caring

A few weeks ago, I got a bit frustrated by the narrative that Large Language Models (or, more generally, Foundation Models) could 'replace clinicians' given their apparent ability to 'pass' medical exams. I wrote a brief thread about it (see below) and then turned it into a more fully fleshed-out argument in this pre-print.


Original Thread:


It’s only been a few weeks, but I’m already SO TIRED of the idea that LLMs are somehow tech’s gift to medicine. There’s more to good medicine than parroting back answers (not to mention LLMs are unregulated and generative). A thread of some of my fave papers about why medicine is more than Q&A.


First, go back and read the actual definition of evidence-based medicine (EBM). It's not about regurgitating medical studies; it is about contextualising the best available evidence for the specific patient: 10.1136/bmj.312.7023.71


Next, read more about the problems of assuming that medicine and EBM are purely about quantitative data in this paper: 10.1136/bmj.g3725


Let’s not forget that turning everything into a ‘medical device’ or something that participates in the medical gaze is a feature of surveillance capitalism: 10.1016/j.socscimed.2023.115810


What about the idea that providing individuals with more and more data and information about their health and how to manage it is somehow empowering? WRONG. See: 10.1007/s11948-019-00115-1 & https://lnkd.in/ev-YcKUu


Now let’s think about fiduciary responsibilities, duty of care, and ‘do no harm’ ethics. LLMs have no semantic understanding. There's a reason why having a ‘learned intermediary’ is useful; see, e.g., 10.1007/s11948-015-9652-2, 10.17705/1jais.00261, or 10.1007/s10730-019-09377-5


Nudges & paternalism? We don’t know how LLMs were trained, and we don’t know why they might present some recommended treatment options in a particular order. We do know about ‘automation bias’, hmmm: https://lnkd.in/e8xbWJ5P


Remember that illness is actually a ‘normative’ concept. It’s not just quantitative. Society decides (& LLMs learn) what does and does not count as illness, sometimes with hugely unethical implications: 10.1007/s11017-016-9379-3


Information systems, when poorly implemented, rushed, or not fully considered, can result in actual patient safety issues: 10.1093/jamia/ocw154


More insight into complexity: https://lnkd.in/eSktKXAG


We *know* that human emotions/behaviours like empathy matter enormously when providing good care even if we don’t really understand why: 10.2471/BLT.19.237198


Does this all mean that I’m anti-LLMs for healthcare? No, absolutely not. I believe we have an ethical imperative to explore all technologies that have potential to improve healthcare and save lives.


I also don’t think that stopping development until we know all the ethical implications is the answer; see, e.g., 10.1111/risa.13265.


I do believe we should be reflexive, mindful, and sceptical. Let’s not get carried away with hyperbole and a false belief in magic bullets and omnipotent algorithms. See: 10.3399/bjgp13X670741


Let’s think carefully about what we collectively want from medicine and healthcare and design new technologies to meet these aims rather than jumping on the bandwagon of every new shiny thing: 10.1007/s11569-015-0229-y, 10.1016/j.respol.2017.09.012, 10.48550/ARXIV.2011.13170


Ok rant over.
