Jessica Morley

Rebranding 21st Century Healthcare

I've promised for a while now that I would share an overview of my postdoctoral research programme, so here it is! "From Personalised Unwellness to Algorithmically Enhanced Public Health."


In brief, I am trying to develop a theory of what 21st century healthcare currently looks like, rhetorically and in practice, and what I think it should look like from a pro-ethical design perspective.


This involves:


  1. Explaining why the current drive to encourage healthcare systems across the globe to adopt digital health and AI technologies relies heavily on the rhetoric of P4 medicine, and the consequences of this. Specifically, the consequences of handing over significant power and influence over healthcare system design to the 'technical' epistemic community - those responsible for developing digital health and AI tools - who (a) assume the adoption of these technologies is no more complicated than shifting from paper to electronic health records; (b) push the narrative that regulation is negative by default as it stifles innovation; and (c) encourage the development of a very technologically deterministic attitude.

  2. Highlighting the fact that digital or AI-enabled healthcare is not "just" data-driven. Medicine has always been data-driven. Increased reliance on digital and AI tools is instead resulting in:

    1. A move away from a 20th century model of care that was focused on evidence-based and patient-centric care, and reliant on 1:1 relationships and narrow trust; and

    2. Towards a 21st century model of care that is algorithm-based, digital twin centric, and reliant on many:many relationships and distributed trust.

  3. Understanding how this transition is disrupting existing models of governance, including legal, ethical, and social models.

  4. Developing the concepts of:

    1. Personalised unwellness, to explain the ethical consequences of the rhetorical push towards P4 or 'personalised' medicine. Specifically, the consequences of outsourcing knowledge about the body to algorithms; disrupting the fundamentals of care; and assuming that digital tools will enable universal coverage and that this will result in equitable access to care, and how this is resulting in:

      1. A two-tiered system divided between the worried well and the ignored sick

      2. An expansion of the definition of healthy, so that it no longer simply means 'absence of illness'

      3. A shift in responsibility for maintaining (and improving) public health from the state/healthcare institutions to individuals.

    2. The Inverse Data Quality Law, building on the Inverse Care Law developed by Hart in 1971, to highlight the drivers and consequences of a situation in which "the availability of high quality medical or social care data will vary inversely with the need of the population served."

  5. Arguing that we need to rebrand, re-design, or re-frame digital/AI-enabled 21st century healthcare by:

    1. Recognising that the infosphere is a social determinant of health. Specifically, acknowledging the fact that information recorded about individuals, information produced by individuals, information generated about individuals, and information consumed by individuals (and variation in the quality of this information) have a direct influence on the health of individuals and, as a consequence, the population.

    2. Recognising that digital health is public health, not personalised health, and that many of the harms arise at the group, sectoral, and societal level rather than the individual level.

    3. Focusing the technical epistemic community's efforts on information needs rather than wants. Specifically, we should be focusing on developing digital/AI tools that provide information for a specific, definable and measurable purpose, and only in situations where we have evidence that the provision of the information required to fulfill this purpose will result in positive outcomes. In other words, we should stop developing AI for the sake of AI. Innovation is not net positive by default.

  6. Enabling this rebranding by:

    1. Generating evidence of the harms the current 21st century model is resulting in, and why this is happening. For example, auditing the evidence available to support the efficacy of FDA/MHRA-approved AI/ML/LLM-derived tools.

    2. Developing theory to explain why it is happening through conceptual models and frameworks. For example, developing a conceptual model for the successful implementation of algorithmic clinical decision support into the NHS.

    3. Piloting practical solutions to move between the what and the how. For example, the checklist for responsible MLOps.


That's it! If you are keen to know more, then you can watch me explain the ideas in more detail in the video below; the transcript and slides are underneath the video:



Hi, if you don't know me, I'm Jess. I'm a researcher at the Digital Ethics Centre at Yale University. Prior to being here, I was a researcher at the University of Oxford, where I did my BA, MSc, and PhD, and where I worked at the Bennett Institute for Applied Data Science. And today, I'm going to give you an overview of my postdoctoral research programme, "From Personalised Unwellness to Algorithmically Enhanced Public Health."


Roughly, what this talk will cover: I'll give a bit of background about why healthcare systems are currently struggling so much, and the rhetorical solutions that are being developed for these crises; then how this is transitioning the framing of healthcare from a 20th century model, which was very much focused on patient centric, evidence based care, to a 21st century model of healthcare, which is very much focused on an algorithmically centred model of care. I'll describe why this is resulting in a system of personalised unwellness and why I think we need to rebrand the way in which digital health is being sold from a rhetorical perspective, and I'll talk a little bit about how we go about doing that from a research, policy, and practical perspective.


 Let's get started with a little bit of background.


 

I would imagine that most people who are watching me give this talk would be aware that healthcare across the globe is currently in crisis. We have more complex patients. This is partly because patients are a lot older, but there's also an enormous amount of multimorbidity.


And so, people are living longer with more conditions and therefore they are more complex to manage. There are also more complex treatments, we have far more complex technologies involved in care, and all of this is resulting in skyrocketing costs. But unfortunately for healthcare systems, we are seeing a pattern of diminishing returns: the increase in investment is resulting in poorer outcomes. The clearest way we can see this is that for the first time in a very long time, in places like the UK and the US, life expectancy is declining. Of course, some of this is partly to do with the impact of COVID, but COVID doesn't explain the whole overarching pattern.


There's a lot of rhetoric surrounding how we might solve these crises. In particular, these two ideas: P4 Medicine and the Triple Aim.


To give a little bit more detail, P4 Medicine is this idea that we can monitor everything to do with a person's health, from wearables and electronic health records to what they buy in the supermarket (so-called digital phenotyping), in order to identify what risk factors that person has for developing particular conditions, and therefore predict what likelihood there is that they will become unwell at some point in the future.

This allows the healthcare system, in theory, to intervene earlier and therefore to prevent illness. It's participatory because people have to participate more actively in their own healthcare than they have in the past, primarily through generating data about themselves. And then the personalised bit comes from the element of treatment: rather than the one size fits all approach to treating particular conditions, we can target treatments to that specific patient by this analysis of every single aspect of their person, whether that be their genetics, whether it be their environment, et cetera, et cetera. And thus you have this overarching idea that if you can intervene earlier, enabled by prediction, and then target treatment, you will make care that is more cost effective and more efficient.


And that in itself will therefore enable this so-called Triple Aim, which is the idea that you can simultaneously improve population health and the experience of care whilst reducing per capita cost. In my view, this is magic. This is wishful thinking. It's much more likely that we will see a wonky triangle rather than a perfect triangle. So, you might see a reduction in per capita cost, but a reduction also in the population's health and in the experience of care.


So what is driving this narrative of solving the current global healthcare crisis through P4 medicine? First of all, what is happening is that policymakers are signaling to the tech market that there is a need for more efficient, cost effective care.


And as a result, because of this view that P4 medicine might help achieve the Triple Aim, they are really incentivizing the adoption of AI. In the UK, for example, the government released a 21 million pound fund to incentivize the NHS's adoption of AI to help with winter pressures. And as a consequence of this signaling from policymakers and global healthcare systems, there has been a significant increase in investment in the digital, data, and AI industry. And as a result, a massive number of unvalidated, unregulated, unevidenced AI and digital health solutions have flooded the market. Whether that be clinical decision support, risk stratification algorithms, or direct to consumer apps, these things are everywhere. And because they are dominating the conversation so much, the technical epistemic community has gained influence. By this I mean the people who are in charge of developing these apps, of building models, of training algorithms, of deploying software in hospitals.


That is the technical epistemic community. They have gained influence over the development of policy. The problem with this is that it typically assumes that what is happening is first order change. As in, it is a simple transition from analogue to digital. They're assuming it's the same as taking a paper based record and turning it into an electronic health record, and that there's nothing really more complicated involved in that. And then there is this narrative that regulation stifles innovation. We have again seen this in the UK. It's very clear, but you can see parallels elsewhere. In the UK the language is literally 'innovation friendly regulation' rather than 'regulation friendly innovation'. And there is this idea that we want innovation at all costs. Innovation, particularly AI, is presented as the solution to the global healthcare crisis, and therefore we will push it forward no matter what happens. And this is ultimately breeding a very technologically deterministic attitude: this idea that just by making tech available, we will immediately see benefits. You can see this really clearly in a lot of policy language. Again, to use a UK example, the NHS Long Term Plan literally says the words: we will use clinical decision support software to improve patient outcomes. There is no description of how exactly that is going to be achieved. It's just a very deterministic, we will achieve this by doing X.


What is this resulting in? This is ultimately resulting in a new framing of healthcare, or what I'm calling 21st century healthcare.



What's really important to note is that this is not just data driven healthcare. As much as that is the rhetoric that is really commonly used, we see it all the time: data driven technologies, data driven policy. Actually, this is not just about being data driven. Medicine in its own way has always been data driven. You can go back and look to Hippocrates himself saying that you must record all of the symptoms of a particular patient. There are very primitive versions of electronic health records on stone slabs. Medicine has always been data driven. And by describing the transition that is happening, through the introduction of digital medicine and all of those drivers and the push towards P4 medicine that I have just described, as merely data driven, we're obfuscating what is really happening: the transition from a 20th century model of care to a 21st century model of care.


In the 20th century model of care there was a real drive away from paternalism towards evidence based medicine. We had the introduction of Cochrane in the UK. You had people like Sackett pushing this narrative of what evidence based care is. The idea being that you generate high quality evidence of a particular treatment, for example in an RCT, and provide that evidence to clinicians, who contextualise it to their patient at the moment in time when they see them. We make medicine patient centric and we focus on shared decision making. So there's been this real drive in this model of care away from paternalism, and part of that was away from an informational imbalance of power between clinicians and patients, because patients had, in theory, as much information about their care as the doctor, so they could genuinely be involved in decision making.


This was a very one to one relationship. Your clinician was monitoring you, you had continuity of care, and there was this one to one relationship. And therefore, there was also a very narrow system of trust. All the elements of trust, whether that be accountability, transparency, consent, or shared decision making involvement, all happened in one place. We knew where all of those things existed. They existed inside a clinical consultation, and we knew how to protect them, and we knew how to develop regulations and systems, et cetera, that ensured that narrow definition of trust was maintained. And people felt very much like they could trust in their healthcare provider and that they could trust in their healthcare.


Now, with this drive towards P4 medicine, what we are seeing is this transition towards the 21st century model of care. As opposed to being evidence based, we are now seeing medicine that is becoming algorithm based. That's very different, as much as people might try and pitch this idea that things like algorithmic clinical decision support are designed to make medicine more evidence based, to make sure clinicians have access to the evidence that they need in real time. It's algorithms ultimately making a decision. It's a move away from patient centric care towards automated and digital twin centric care. What I mean by that is, when we had this one-to-one, patient centric, evidence based model in the 20th century, everything was about the patient's physical self; clinicians were always making decisions based on observations of the patient's physical self.


In an automated and digital twin centric model, decisions are being made by algorithms, whether that be an app, whether it be clinical decision support or whatever, about the patient's digital self, the digital twin of themselves, so how they are represented in data. And that might not directly represent the patient as the patient sees themselves.

And this is ultimately moving clinical care away from this one-to-one relationship to a many to many relationship. We have many more people involved in that clinical context, whether it be those who are developing algorithms or people who are collecting data. There's a much more complicated pipeline, but there's also a many to many in the sense that these technologies operate at a group level, rather than an individual level. As much as it might be pitched that these types of technologies enable personalised medicine, what they really enable is group level stratification. It's like making medicine into targeted advertising. I have a whole bunch of patients who fit in Category A, a whole group of patients who fit in Category B. You match Category B, therefore I will treat you in the way that has worked for people in Category B in the past. That's really what is happening. And then we're doing that in a faster, more automated way. And then the other element is that this model of trust is completely distributed. So not only does trust now sit in many different places, because we have to trust in many more different types of people and many more different types of organizations.


We also see the elements of trust, the things that I mentioned (accountability, liability, accuracy), sitting in different parts of the system. They no longer sit with just your clinician; they arise in different parts of the system. And then on top of that, we are also requiring more traditional components of that system to take on new roles. For instance, we are requiring clinicians to take on data work, which they would previously not have been responsible for, and which we would previously not have needed to trust them to do. And what do I mean by data work? Basically, the management of electronic health records. Making sure you record a patient's condition accurately using the right SNOMED code in the UK, or another type of clinical coding in a different country; making sure that you don't put things in the medical record in words that would be potentially harmful to patients were they to see them; or even just making sure electronic health records are complete, whether that be asking your patient if it is okay to record their ethnicity. It's a huge responsibility, and these are new tasks that we did not traditionally need to trust people like clinicians to do, and we do now. The point being that transitioning towards this system of P4 medicine is not just about making medicine data driven. It is in fact transitioning the whole model of care.


The thing that is really important is that this is happening inside a largely ungoverned black box.


And the reason for this is because all existing modes of governance have been disrupted.


The law is really out of date. Data protection law is broken; so are medical device law, consumer protection law, discrimination law, and liability law. To give a couple of examples of what I mean by that, particularly from a UK perspective: data protection law traditionally sees legal uses of healthcare data being based on one of three things. That might be direct care, it might be service analytics, or it might be research. Things like clinical decision support software that is based on machine learning are in fact doing all three of those things at once. You run a machine learning model over the top of electronic health records to identify which patients need to be called in for screening, for example, because they are at a higher risk of a particular condition. That is direct care. But the model is also learning over time, gaining greater predictive accuracy for that particular condition by looking at more factors associated with the risk, by looking at what works and what doesn't work. That is research, because it is updating itself in real time. And then the service analytics aspect is that these models are also often passing information back up to, for example, policy makers who are monitoring compliance with standards of care.


So that's data protection law.


Medical device law: the vast majority of these types of tools sit outside the current remit of medical device law. And medical device law does not know how to deal very well with things that are adaptive. It's already struggling with machine learning; how on earth it is going to deal with generative AI is really, really unclear.


And then liability law is the big one. This is the one that gets talked about all the time. Who is liable if an algorithm, say for example algorithmic clinical decision support software, misdiagnoses a patient and that patient comes to harm? Would it be the doctor who was in the room running the clinical decision support, who gave the diagnosis from the CDSS without having questioned it? Would it be the people who trained and developed the model? Would it be the organization who decided to implement that particular model without necessarily checking that it had been validated properly? Would it be those who collected the data in the first place? This is all open. The way that the law currently seems to be interpreted, or at least is implied that it will be interpreted, is that liability would sit with the clinician, but that is up for debate and I don't necessarily think that's where it's going to sit.


The other aspect of this, of course, is negligence. So, if you are found to be liable, then you may also be found to be medically negligent. But the way in which medical negligence is currently dealt with in the courts is: would another clinician of the same skill level have made the same decision in the same circumstance as the clinician that is on trial, so to speak? Now, when we have an algorithm in the middle of that situation, what does that look like? Are we then saying that the gold standard, or the model of care that we are expecting that clinician to have achieved, is what an algorithm would have done? Would another clinician have acted in the same way, having been given the same information by an algorithm? Or are we talking about an algorithm versus algorithm comparison: would another algorithm have given the same result in the same situation with the same information? We don't know.


Medical ethics are also unfit for purpose in this scenario. Medical ethics are your sort of traditional bioethical principles: autonomy, justice, beneficence, non-maleficence. They are designed to protect one person from one clinician. Remember what I said about the 20th century model of care being a one-to-one relationship? This is where we really see that playing out. Medical ethics are designed to protect individual patients from the actions of individual clinicians. Do I know, as a patient, why my clinician is recommending that I take this particular drug? Have I consented to taking the drug? Have I been involved in shared decision making? That is the way in which we discuss medical ethics. But the thing is, with digital health, that's not how it works. Remember what I just said about there being issues to do with it not being personalised, it really being targeted advertising or stratification. There is no framework for how we deal with medical ethical issues that operate at a group level. It's unlikely that an ethics group inside a hospital would be having a discussion, for example, that would look like: if we implement this particular software, what are the downstream effects on the different groups of the patient population that we represent? And what's really important from the ethical perspective is that statistical accuracy is not clinical efficacy. What you see an enormous amount of the literature doing is making a model that has a high level of statistical accuracy. Look, we achieved 90 percent accuracy with our model; therefore, it should be deployed in the healthcare system. But there is never any follow through to look at what actually happens to patient outcomes. Is it actually clinically efficacious in a way that is safe? And in a way that is ethically justifiable and indeed socially acceptable? That follow through rarely happens, and that is because there is no legal framework mandating that it needs to (remember, medical device law is broken), and there's also no ethical imperative to do it, because most of the problems that arise from not having done that follow through happen at a group level rather than an individual level.


And then finally, we need to talk about the fact that the necessary skills are lacking. So, we're starting to see issues to do with automation bias, with the loss of autonomy, and there's no means of questioning those decisions.


Automation bias is this idea that computers are always right just by virtue of being computers. They are perceived to be more objective. They are perceived to be more accurate. Remember, again, statistical accuracy is not the same as clinical efficacy. So this is a real problem, because unless clinicians or patients feel as though they have the necessary skill, and therefore the necessary degree of autonomy, to question the outputs of these kinds of systems, it's unlikely that they ever will, and therefore how are we going to make sure that they are right? And this is a move away from evidence based medicine, because of that point I made about evidence based medicine being about contextualization.


If all that is happening is that clinicians, patients, policy makers are bowing down to algorithmic clinical decision support without doing that contextualization, then we're not really being evidence based in the true meaning of the word, hence this transition from the 20th century model of care to the 21st century model of care.


And this move towards 21st century model of care and away from 20th century model of care is causing a fundamental shift.



This is ultimately resulting in personalised unwellness.


 

Why is this shift occurring? The transition from the 20th century model of care to the 21st century model of care is resulting in three things: it is outsourcing knowledge about the body, it is disrupting the fundamentals of care, and it is pushing a narrative that universal coverage equals equitable access, which it does not.


To start with this idea of outsourcing knowledge about the body: by moving in this direction of P4 medicine we are changing what counts as evidence of illness and its absence, and we are also undermining the right not to know. Now what do I mean by this? Algorithms have no semantic understanding. They are very good at recognizing quantitative information. They can very easily record normal, abnormal, normal, abnormal. They can very easily understand height, weight, temperature. But, as anybody who has ever dealt with a patient, or has been a patient themselves, or has been a parent, will know, most people do not know they are ill in quantifiable ways. They are much more likely to say things like: I don't feel very good, I'm not feeling well, I'm not feeling like myself, I'm a little bit under the weather, my child is not acting like themselves. Those are all haptic sensations, so it's all to do with how that person feels or how they feel that their child is behaving. And that relies on semantic understanding, which algorithms do not have. So, what happens if, for example, we're using clinical decision support software, a patient goes to see their doctor, and rather than the doctor genuinely listening to that patient, they are just typing the symptoms into the algorithm, the clinical decision support software. The algorithm comes back and says: no, you're not ill, nothing in here is flagging to me that you are ill. But the patient still says that they don't feel well. Who is right? The patient or the algorithm? Is it the patient's knowledge about their body that matters the most, or the algorithm's knowledge about that person's body? And that is the point as well about the shift from the 20th century to the 21st century being away from patient centric care to digital twin centric care.


And we're also seeing this undermining the right not to know. The right not to know is a meta ethical principle. This is the idea that you have a right to determine what you know about yourself, and when it is helpful or harmful to you, because it helps protect your integrity of self: this idea that you know what is controlling your life and what is having influence over your life. And so, you have, in theory, a meta right to say: actually, I don't want to know if in ten years I'm going to develop a condition that absolutely nothing can be done about, because living with that knowledge, knowing it's coming, is more psychologically harmful to me than it being a surprise when it does happen. We have seen this principle applied very well in genetics. If you are going to get a genetic test, you have to go to genetic counseling. You have to have made sure that you've had conversations with your family. None of those protective mechanisms are being put in place in terms of digital health. But what we are seeing is systems that might do background level screening of risk for particular conditions, and then maybe push notify that patient to say: hey, you are at risk of X condition. All of that is undermining this right not to know, and really taking away from patients' autonomy, and that does significant harm to patients' integrity of self, and to this idea that they know what is happening to them and the factors that are influencing their lives.


Then this shift is also resulting in a disruption to the fundamentals of care. The fundamentals of care are very much based on trust, on there being a meaningful relationship between a patient and a clinician. Whether that clinician be a GP or a hospital practitioner, patients are supposed to believe that person is acting in their best interests. And as I have said before already, in that 20th century model of care, there was a real push away from paternalism and a real push to try and rectify the power imbalance between patients and clinicians.


Now we are seeing this completely disrupted. Everything is now happening in a black box. We no longer have a relationship between a clinician and a patient; we have a clinician, a patient, and an algorithmic black box that is sitting up there, that might be making decisions or recommendations in a way that we do not understand, and that might be being manipulated by different organizations, by different bodies, by third parties, increasingly private parties that are designed really to optimise profit. And that is changing this power dynamic, because how can you try and rectify a power imbalance or an informational imbalance between a clinician and a patient if the clinician themselves does not know why they are recommending a particular treatment, because they are simply listening to the algorithm and the algorithm is uninterpretable?


We're also seeing a real devaluing of the ethics of care. That is about things like compassion and empathy. Again, this comes back to the point that algorithms have no semantic understanding and that they rely very heavily on quantitative information rather than qualitative information. We have an enormous amount of evidence that shows that healthcare outcomes are better when people feel like they have been listened to and they have had an empathetic interaction with a clinician. An algorithm can never be empathetic. It can mimic empathy, but there's no meaning behind that. It doesn't understand what that means. And we don't yet know whether, because it doesn't know what it means, it is genuinely mimicking the effects of empathy, i.e., whether it has the same effect on improving the experience and therefore improving the outcomes of care. We cannot know.

And then ultimately the big problem is that we're challenging accountability. This comes back to that point I made before about the 21st century model of care having a distributed model of trust. We don't know the source of things that go wrong. If an algorithm misdiagnoses someone, was it because the algorithm was trained poorly? Was it because it was given inappropriate data? Was it implemented wrongly? Was it not validated? Was it never supposed to be deployed in a clinical setting of that nature? All of this is really disrupting the nature of accountability.


And then very lastly: anybody who's paid attention to a lot of the narrative that surrounds P4 medicine and the so-called Triple Aim will know that one of the main legitimization arguments that policy makers often use is: oh, this will improve access. People have trouble accessing in person care, so if we create an algorithmic system, enable people to be treated by triaging algorithms or chatbots, and give them apps to enable them to monitor their health at home, we have somehow managed to achieve a real improvement in access to care. Now, that is a really simplistic understanding of what access means. It's also really underestimating the complexity of what it means to make sure people have genuine access to these technologies. At a super basic level, it might just be that people don't have the device. At a slightly more complicated level, there are also variations in people's digital health literacy, so their ability to interpret information and to act on it, and that has implications. Then there are, of course, issues to do with bias. There is, again, this underlying narrative that runs through a lot of this stuff that algorithms will be more objective than humans. Oh, humans are sometimes really not good at being objective. They are really biased against women. We know this is true. We know that clinicians don't take women's pain as seriously as men's. And we know that gets worse when we're talking about women of colour, for example, or other forms of intersectionality. But algorithms that are trained on the data that is produced by those clinicians in the first place will be exactly the same. They will behave in exactly the same way. There is not some magic happening in them that will somehow make them more objective if the data they are trained on is biased. Therefore, we will see an exaggeration of bias.


And then there are also new sources of bias, because there are new patterns. Machine learning is a giant pattern recognition machine. It doesn't necessarily know how to differentiate between signal and noise. It is, in fact, just recognizing patterns: there's a pattern, there's a pattern, there's a pattern. We don't know whether those patterns are meaningful, and those patterns might, in fact, be harmful. What happens, for example, if an algorithm, or clinical decision support software, or a public health screening tool, starts seeing that people who have purple hair and blue eyes and who are five foot seven are more at risk of condition X? They might be discriminated against. We've seen this happen in history. We have seen it happen with the stigmatization of conditions like HIV, and with specific mental health conditions. There is a very high likelihood that by using ML or any other type of digital technology to recognize more patterns, we will in fact just see more patterns of discrimination and create new sources of bias. And thus, universal coverage is absolutely not the same as equitable access.



Ultimately, all of this is leading to the creation of what I call the inverse data quality law.


The inverse care law is about 50 years old. It was written in 1971, I think, and now I am adapting it for quality of data. So, what I'm saying is that the availability of high quality medical or social care data will vary inversely with the need of the population served. What I mean by that is that people who have access to high quality data that they generate about themselves, through the use of wearables, through regular access to in person healthcare services, and all of this stuff that monitors their health in a great amount of detail and at a greater level of quality, so in depth and breadth, are the types of people who are actually least likely to have healthcare needs. Whereas the people who are more likely to have healthcare needs are less likely to generate high quality data about themselves. That might be people who, for example, have no fixed address. It might be people who don't feel safe accessing healthcare because they don't trust it or because there are legitimate reasons for not doing so. They might not be able to afford the latest smartphone or the latest Apple Watch. And so, we start seeing this pull where the people who have the greatest healthcare needs have the poorest quality of healthcare data, and therefore will have the poorest quality outcomes, if we are transitioning towards the 21st century model of care, which is entirely reliant on data.


So, to give a couple of examples of this: people might have seen the paper that my friend Joe, myself, and a whole bunch of other great people published looking at NHS data flows. We mapped all of the electronic patient data that flows across England. We showed that data was flowing to more than 460 subsequent institutions, whether they be academic, commercial, or public data consumers. And the main point, in this context, of that paper is that because the volume of data flowing is so high and there is so much obfuscation, we cannot tell whether that information is flowing in an equitable way. As in, we cannot tell whether there are equal amounts of information flowing about different types of people. So, does everybody in the population have the same level of attention being paid to their care through research, through analytics, through the development of tools? The answer is probably no, but because those flows are so nontransparent, we cannot tell, and we cannot do an evaluation of the equity of those flows and therefore the equity of access and the equity of outcomes that might result.


There's also the fact that these multistage data flows really limit transparency and have a very negative impact on public trust. We already know that the opt out rate in the UK, for example, is over 5 percent now, because there is such a limited level of trust in the NHS's ability to control data. And we know that the people who are likely to opt out are nonrandom. So, the NHS's data becomes more biased in ways that we do not necessarily know and do not necessarily understand. And therefore, we cannot rely on the fact that data is genuinely representative of the whole population if a nonrandom subset has opted out.
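To make the point about nonrandom opt-outs concrete, here is a minimal, purely illustrative sketch in Python. The group sizes, prevalence figures, and opt-out rates are invented for demonstration and are not NHS figures; the point is simply that an overall opt-out rate of roughly 5 percent can still skew the prevalence estimated from the remaining records when the people opting out are not a random subset:

import random

random.seed(0)

def simulate(n=100_000):
    # Two hypothetical groups: group B has higher healthcare need AND is
    # more likely to opt out of data sharing (all rates invented).
    people = []
    for _ in range(n):
        group = "B" if random.random() < 0.20 else "A"
        has_condition = random.random() < (0.30 if group == "B" else 0.10)
        opted_out = random.random() < (0.15 if group == "B" else 0.025)
        people.append((group, has_condition, opted_out))
    true_prevalence = sum(c for _, c, _ in people) / len(people)
    remaining = [c for _, c, o in people if not o]  # records left after opt-outs
    observed_prevalence = sum(remaining) / len(remaining)
    return true_prevalence, observed_prevalence

true_prev, observed_prev = simulate()
print(f"True prevalence:     {true_prev:.3f}")
print(f"Observed prevalence: {observed_prev:.3f}  (the remaining data understates need)")

Because the higher-need group opts out more often, the data that remains systematically understates how much need exists, and nothing in the remaining records tells you by how much.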



These are just some graphs from that paper to show the scale of where the data is flowing. It is flowing to multiple places, and you can see the breadth of organizations. The largest is universities, and the NHS is actually a relatively small customer of its own data. This is, again, something to be aware of. If we are talking about the fact that the NHS is a public system, then we are starting to create a system where private companies that have different types of motives might have greater insight into the way in which the NHS is functioning, and into the population's needs, than the NHS itself. And that will ultimately have implications for the way in which services are designed and who they are designed for.



The second example is apps and the quality of the evidence that they actually work. This is an ongoing study that we have not yet published, but we hopefully will in the very near future. We scraped the US and the UK app stores, specifically the iOS app stores. We ended up with 153 US apps and 170 UK apps. Roughly half in each claim that they are delivering some kind of quantifiable impact on healthcare. They are, for example, reducing your symptoms of anxiety by a specific percentage, or: we help you manage your depression. The vast majority of the claims are very vague, as in they're more like that second example than the first. We will help you manage your depression; what does that mean? Nobody really knows. But what is most important is that over 90 percent of apps in both of these stores say that they have evidence to support their efficacy claims, but the quality of that evidence is exceedingly low.


This is what we audited.



So, just a very brief overview; the paper, when we publish it, will go into far greater depth. But you can see that in the UK, for example, the vast majority of evidence supporting very specific claims of efficacy was low, and then you still had almost a quarter falling in the very low category. In the US it's even worse: we have almost a third falling within that very low category. Now, those categories basically mean: if it's very low, the evidence is just the number of people who have downloaded an app. For example, we're saying, oh, because 25,000 people have downloaded this app, we have helped 25,000 people manage their depression, which is not necessarily true. Those 25,000 people might have deleted that app the next week because it doesn't work. Other things that fall in the very low evidence category are borrowings of evidence. This happens a lot with mental health apps, for example. You will see mental health apps claim that they have evidence; what they mean is that there is some evidence that CBT, Cognitive Behavioural Therapy, works when it is delivered by a person to another person, and then they are applying that and saying it applies directly to the app that has digitized cognitive behavioural therapy. This is, again, not really true. We have no proof that just mimicking aspects of an in-person, person-to-person intervention and digitizing it will have the same level of efficacy.
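To illustrate the shape of this kind of audit, here is a hypothetical evidence-grading sketch in Python. The category labels echo the very low/low distinction above, but the specific criteria, evidence types, and app names are invented for illustration; they are not the actual rubric or data from the unpublished study:

def grade_evidence(evidence_type: str) -> str:
    # Map the best evidence an app cites to a rough quality grade (illustrative rubric).
    rubric = {
        "download_counts": "very low",      # e.g. "25,000 people have downloaded this"
        "borrowed_evidence": "very low",    # e.g. citing trials of in-person CBT
        "user_testimonials": "very low",
        "observational_study": "low",       # uncontrolled, app-specific data
        "pilot_trial": "moderate",          # small controlled study of the app itself
        "randomised_trial": "high",         # RCT of the app itself
    }
    return rubric.get(evidence_type, "unclassified")

example_apps = [
    ("ExampleCalmApp", "borrowed_evidence"),
    ("ExampleMoodApp", "download_counts"),
    ("ExampleSleepApp", "randomised_trial"),
]
for name, evidence in example_apps:
    print(f"{name}: best cited evidence = {evidence} -> graded {grade_evidence(evidence)}")

The key design point such a rubric captures is that the grade attaches to the best evidence about the app itself, which is why borrowed evidence and download counts land at the bottom no matter how strong the borrowed source is.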



The consequences, personalised unwellness.


We are seeing the development of a two-tiered system between the worried well and the ignored sick. So, the people who are in the worried well are those who generate high quality data about themselves, who therefore constantly feel as though there is something that they could be doing in order to improve their healthcare.


If anybody has ever heard the adage, if you go looking for something, you will find it: those are those people. They are the worried well. And then the ignored sick are the people who do not generate healthcare data, or high-quality data, about themselves, but who might actually need access to healthcare and cannot get it, or get lower quality care by being given purely digital devices rather than access to human healthcare.

The second part of this is that we are seeing a shift where health is no longer the absence of illness. Instead, health is a constant state of improvement. Everybody can always get healthier. We are seeing this being fuelled by private algorithms and black boxes. And what really matters in the context of personalised unwellness is that this is changing the dynamic of the sick role.


So, the sick role is a theory developed by the sociologist Talcott Parsons that basically says that a person fulfils the sick role if they become unwell through no fault of their own and they did everything that they were tasked with doing by a healthcare professional in order to get well. Those people are fulfilling the sick role, and as a result they are getting social health capital. They are getting the right to be treated well because they are good patients who fulfilled the sick role. Now, who is good? Is it a person who is constantly striving to improve their health and is never unwell? Are they the good patient? Or is it the person who sees that they have a risk of a particular type of disease happening in the future and does absolutely everything that an algorithm tells them to do, even if there is no evidence that will actually prevent the condition from developing in the first place? We really don't know.


And so ultimately this is changing patterns of responsibility. We are really shifting responsibility for maintaining wellness, or constantly becoming healthier, away from the state and healthcare systems to individual patients. This is often hidden in narratives around empowerment, this idea of: we're going to empower patients to take better care of their health by giving them an app that is going to tell them every single possible healthcare outcome in the world. Whereas what that is actually saying is: we've given you all the information needed to protect your health and you didn't do anything about it, therefore you are responsible if you become unwell. Which means you cannot fulfil the sick role, because remember, to fulfil the sick role you must have become unwell through no fault of your own. That means you do not gain sufficient social capital, and therefore you cannot be a good patient and you cannot be treated as such. Hence, we get this system of personalised unwellness.



I've said all of the bad things. I've been very negative.


We cannot only be negative, because we know that these technologies, if they are used well, if they are designed well, can have a positive impact on health. And for anything that does have the potential to have an impact on health, we have an ethical imperative to investigate its use.


What I am saying is we need to rethink how these technologies are framed, how we are relying on them, how we are using them.


So, to rebrand digital health, I think we need to do three things. We need to recognize the information sphere, or the infosphere, as a social determinant of health. That is, the informational environment within which we all live is having a direct impact on people's health, in the same way that the ecosphere, the physical environment in which we live, has a direct impact on people's health. We need to recognize that digital health is public health, not personalised health, so it is happening at that group level. It is risk stratification, or targeted advertising for health, not personalised health. And we need to focus the system's attention on information needs rather than information wants.


We need to get away from this idea that we can predict every single thing possible, and that will automatically be a good thing.


Let's break these concepts down in a little bit more detail. So, first: the infosphere is a social determinant of health. This is a relatively small picture of a model, but it's adapted from a model produced by Dahlgren and Whitehead in 1991 that showed the different influences on a person's health. The bottom level is things that cannot be controlled, and the top level is things that can be controlled. So, you cannot control age, sex, and genetics. Governments have a degree of control over socioeconomic, cultural, environmental, and infrastructural conditions. And then there is the person's personal world of information. It's constantly evolving through time, and it has a significant influence on a person's behaviour. In the same way that things like unemployment, access to water, housing, education, work, and food have an impact on a person's health, the person's personal world of information has an impact on that person's health.



To be more specific: the information that is recorded about individuals, so information that is recorded in, for example, electronic health records, but also by wearables. The information consumed by individuals, whether that be through an app, social media, or browsing the web. The information generated about individuals, so information produced by apps, wearables, and clinical decision support. And the information produced by individuals themselves, which might be what they post on social media, for example, and therefore what other people end up consuming. All of these things, all of these infospherical elements, are now having an impact on health outcomes.



Then, moving on to thinking about public health, we need to think about the ethical implications, and therefore the legal implications, of all of these technologies, not just at an individual level, like you can see here, but at a group, institutional, sectoral, and societal level. For example, if we're talking about inconclusive evidence, so epistemic problems to do with the fact that algorithms might be trained on biased data or data that has problems with missingness, et cetera: misdiagnosis or missed diagnosis is likely to happen at scale, and some groups will be more affected than others. We already know, for example, that even very basic things like wearables that claim to be able to detect arrhythmia work very well for people who have white skin, and far less well for people who have darker skin. And that is just a simplistic example; when we factor in all of the many ways in which healthcare data might be biased, and the ways these models might have differences in accuracy, you can see how some groups are likely to be more affected than others. And at a societal level, we will start to see poorer public healthcare provision for the groups who are more likely to be affected, and therefore worsening health outcomes for society. This causes transformative effects, because we're seeing inequalities in outcome. We therefore need to decide, society must decide, what it is that we are actually trying to achieve, and whose desires that reflects. Because it's probably not everybody's. In fact, it's probably relatively few.
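One way to see why a single headline accuracy figure can hide exactly this kind of group-level harm is the toy calculation below. The numbers are entirely made up for illustration (they are not from any real wearable study); the point is simply that a 90 percent overall figure can coexist with much poorer performance for a smaller subgroup:

# Hypothetical arrhythmia-detection results, stratified by skin tone.
# Format: group -> (correctly classified, total people assessed). Invented numbers.
groups = {
    "lighter skin": (930, 1000),
    "darker skin": (150, 200),
}

overall_correct = sum(correct for correct, _ in groups.values())
overall_total = sum(total for _, total in groups.values())
print(f"Overall accuracy: {overall_correct / overall_total:.0%}")  # 90%

for group, (correct, total) in groups.items():
    print(f"  {group}: {correct / total:.0%}")  # 93% vs 75%

Reporting only the aggregate figure would make this model look deployable while the smaller group quietly receives much worse performance, which is the group-level harm that individual-level ethics frameworks are not set up to catch.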



The last little bit. I've told you the problems, I've told you why I think we need to reframe digital health and exactly what we should do in terms of reframing. Now I'm going to talk a little bit about how I am trying to enable that transition.

 


So, in order to achieve that rebranding and that shift away from information wants towards information needs, I think there is a need for research to pay attention to these categories of information, these different elements of the infosphere, which, as we have now seen, is a social determinant of health.


It is therefore something that impacts public health. We need to think about indirect to patient information: user generated content, including social media, information that people see on websites, and targeted advertisements. We need to think about direct to patient information. This is the information that is presented in apps. Are they efficacious? Do we have evidence? Are they safe? Is the content good? We need to think about wearables: do they work for everybody? And increasingly, we need to think about large language models and their use in triaging and in diagnostics.


We also need to think about indirect to clinician information, so that's everything from medical research through to medical insurance, and about direct to clinician information: we need to think about how we implement clinical decision support software, how it's regulated, how it's validated, and about medical recommendation systems. And then finally, we need to think about direct to policy information, so that would be systems, for example, involving public health surveillance and service analytics.


And by attention, what I mean is we need to think about how these systems are designed, who they are impacting, how they are regulated, how they are governed, what the ethical implications are, whether they are socially acceptable, ethically justifiable, technically feasible, and legally compliant. Those questions have to be asked for each of these different categories of information, each of these different elements of the infosphere, in order to make sure that we are designing a system that is what we want, rather than one that results in personalised unwellness.



The specific actions that I think researchers need to take are: to generate evidence, for example through audits (I already described the App Store audit, but there are others that I'll mention in a second); to develop theory, things like conceptual models and frameworks, that explains why this matters and how it is happening; and then to pilot solutions through things like applied ethics tools and standardized processes.


Let me give you a couple of examples.



So, auditing LLMs: this is one project that is about to kick off. There are a couple of papers I've published in the past with some colleagues of mine, Nicholas DeVito and Ben Goldacre, looking at clinical trials. How well published are these things? What are the barriers to generating that good quality evidence? Or what are the barriers to reporting clinical results in the way that they are supposed to be, in compliance with the law? And now I want to repeat the same thing, looking specifically at the evidence supporting machine learning and, most likely, the use of large language models. So, what is the evidence? Are there clinical trials running claiming to be using these types of technologies? When we last looked, there were about 500 registered clinical trials claiming to use machine learning or artificial intelligence. Are those trials high quality trials? Are they trying to do things like outcome switching? Are they reporting the results? Where are they coming from? Is there a high likelihood of bias? Et cetera, et cetera. Then, talking to people who are doing trials: what are the barriers that they face to generating good quality evidence? And then, where are the edge cases? This is a little bit more technical. This is about, for example, using the idea of adversarial learning to attack, or be an adversary to, existing large language models in order to understand where the edge cases are. Where do they break down? Is it for particular conditions? Is it for particular types of patients? The aim is to find the safety issues and therefore to think through how they might be regulated. Then there is an example of theory development in terms of thinking through how you might successfully implement algorithmic clinical decision support software.
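As a rough sketch of what the first step of such a registry audit might look like in code, the snippet below queries a trial registry for studies mentioning machine learning and counts how many report posted results. The endpoint, parameter names, and response fields are assumptions about the public ClinicalTrials.gov v2 API and would need to be checked against its current documentation; this is illustrative, not the project's actual pipeline:

import requests

def fetch_ml_trials(page_size=100):
    # Assumed endpoint and parameter names for the ClinicalTrials.gov v2 API;
    # verify against the current API documentation before relying on this.
    url = "https://clinicaltrials.gov/api/v2/studies"
    params = {"query.term": "machine learning", "pageSize": page_size}
    response = requests.get(url, params=params, timeout=30)
    response.raise_for_status()
    return response.json().get("studies", [])

def has_posted_results(study: dict) -> bool:
    # Assumed field name in the response; adjust to the actual schema.
    return bool(study.get("hasResults"))

trials = fetch_ml_trials()
reported = sum(has_posted_results(study) for study in trials)
print(f"Fetched {len(trials)} ML-related registrations; {reported} report posted results")

From a starting point like this, an audit would then page through the full result set and examine registration quality, pre-specified outcomes, and reporting over time, rather than a single page of records.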



I think we need to be thinking about designing information infrastructure that provides the system with epistemic certainty, robust information exchange, validated outcomes, actively protected values, autonomous staff, and meaningful accountability. And that means that policymakers have to pay attention to all of these areas. They need to think about the whole pipeline of information in order to enable epistemic certainty. We need to be thinking about data quality, data quantity, and data interpretability.


With robust information exchange, there needs to be policy and regulation regarding things like system integration and interoperability. Right now, to take the UK as an example, we might be in a situation where the providers of electronic health records are not likely to be the people who are also developing algorithmic clinical decision support software, but they will be the people who are responsible for implementing and deploying those systems in clinical settings, because nobody wants to develop a system where a clinician has to simultaneously be in an electronic health record and on a different screen to access algorithmic clinical decision support software: it's inefficient, it's confusing, and it's likely to result in unsafe workarounds and negative outcomes.


We need to think through validated outcomes. How do we test these things? We need to subject them to robust clinical evaluation, but we also need to subject them to ongoing outcome monitoring. That needs to be standardized, and in some cases, for example with foundation models or generative AI, those processes even need to be invented.


We need to think about actively protecting the values of healthcare. We have created a bit of a system right now where the people who are developing most of the technologies that are being implemented into healthcare do not come from a healthcare background. They are people who have phenomenal technical skills, but they do not have any understanding or experience of the complexity of care, and they need to, because they might not understand the importance of things like patient centricity and of making sure healthcare is useful to everybody.


Autonomous staff: this is about making sure people have the right to override algorithmic clinical decision support software. It's making sure they have legal protection from a liability perspective. It is making sure that they have training in how these tools are used. And meaningful accountability is making sure that there's a proportionate governance framework. So, we need to move away from this idea that regulation stifles innovation. We have to flip this narrative on its head, away from innovation friendly regulation to regulation friendly innovation.


And then finally piloting practical solutions.



So, this is Charm. This is a project that is ongoing with a huge number of people, mostly myself and Joe Zhang over at Imperial, looking at creating a checklist for actioning responsible MLOps, that is, responsible machine learning operations. We're trying to take the idea that software engineers know how to develop software responsibly, because they know how to develop reliable software, and that clinicians and clinical systems know how to make sure that care pathways are responsible and well governed, and bring those two aspects together. So, what are the different processes, technical and clinical or managerial, that would arise at each of the stages in the MLOps life cycle? That is being developed with a large group of people, and hopefully once we have the whole checklist, we can do some pilots of what that actually looks like and try and see if it works in practice.
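As a purely illustrative sketch of the shape such a checklist could take, the structure below pairs technical checks with clinical or managerial checks at each lifecycle stage. The stages and items are examples I have chosen for demonstration; they are not the actual content of the checklist described above:

# Each lifecycle stage pairs technical checks with clinical/managerial ones (illustrative only).
checklist = {
    "data curation": {
        "technical": ["document provenance and missingness", "version the dataset"],
        "clinical": ["confirm the cohort reflects the intended care pathway"],
    },
    "model development": {
        "technical": ["report performance stratified by subgroup"],
        "clinical": ["agree clinically meaningful outcome measures"],
    },
    "deployment": {
        "technical": ["integrate with the electronic health record", "log predictions"],
        "clinical": ["define override and escalation procedures for clinicians"],
    },
    "monitoring": {
        "technical": ["watch for data drift and performance decay"],
        "clinical": ["audit patient outcomes, not just model accuracy"],
    },
}

for stage, checks in checklist.items():
    print(stage)
    for role, items in checks.items():
        for item in items:
            print(f"  [{role}] {item}")

The design point is simply that every stage carries both kinds of obligation, so neither the engineering side nor the clinical governance side can be signed off in isolation.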


And then that's the end.



So, thank you very much if you have listened all the way through.


You can find out more about me on my website, which is down there, or you can follow me online. And I hope you find this interesting.

