Jessica Morley

2024 Weeknotes #5

Introduction

The end of the first full week in February (also known as the first week of anticipating the forthcoming new Taylor Swift album) and I now have too much work to do and 550 Italian words committed to memory. This is both predictable and largely a good thing. Being overly busy is my comfort zone and it helps with the homesickness. The tricky bit is finding a balance between being sufficiently busy to keep the brain active and homesickness at bay, without triggering good old panic disorder - a constant battle that I do not always win. Even when I lose, it's worth it tbh, because I love feeling excited about work and I most definitely am.


Anyway (comunque), on with the weeknotes!


Things I worked on

  • Second-order trust in distributed systems. As I mentioned in my previous weeknotes, I am currently trying to develop a theory of 21st-century digital trust for highly distributed systems involving multiple interacting artificial and human agents. I believe that part of the reason we see trust declining in so many digitally-enabled industries and services is that, whereas once the components of trust could be found in a single 'agent', they are now distributed both in a 'geographical' sense (i.e., spread across different places) and in a 'levels' sense (i.e., one component arises at one level of the distributed network and another at a different level). It's a very fun thought experiment, but I think it has some fairly significant implications for policy development and the regulation of AI in particular. This week, I began to work on this idea properly, developing the argument and getting stuck into the reading (see below). A toy sketch of what I mean by 'distributed components' appears after this list.

  • Theory of infrastructure design. The first chapter of my thesis outlines a theory of information infrastructure design that pulls from the logic of design as a conceptual logic of information developed by Luciano Floridi, systems theory developed by Donella Meadows, organised vs disorganised complexity theory developed by Weaver, and systems architecture by Cameron and colleagues. In a nutshell, the overarching argument is that information infrastructure plays such a significant role in today's society (specifically in health in the context of the NHS) that its development cannot be left to chance (i.e., ad hoc, patchy development or a 'let a thousand flowers bloom' approach), but nor can it be controlled by large, top-down, Government-led initiatives that try to impose decontextualised rules upon the entire system. Instead, there is a need to design the information infrastructure system in a way that recognises that form follows function, i.e., we have to decide what it is that we want information infrastructure systems to achieve before we can identify the necessary infrastructure requirements (policies, technical architecture, people, skills, etc.) that will need to be 'built' or 'implemented'. I am now working on turning this chapter into a separate paper that will use AI in healthcare as a case study, but make a more generalisable argument for policymakers. I was originally planning on working on this paper later in the year, but the publication of the UK Government's response to the consultation on the AI white paper pushed the timeline up. See 'Things I thought about' below for more.

  • Presentation for med school. On Wednesday of this coming week (Valentine's Day), I am presenting my work to a couple of groups at the Yale Med School, to hopefully spark some conversation and find opportunities for collaboration. I have, therefore, spent an inordinate amount of time this week prepping the presentation - it builds on the presentation I gave to the DEC a couple of weeks ago, but goes into more granular detail.
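
To make the 'distributed components' idea in the first item above a little more concrete, here is a minimal, entirely hypothetical sketch (not taken from the thesis or any paper). The agent names, levels, and trust components are invented purely for illustration; the only point is to show the same components that once sat in a single agent ending up scattered across agents and levels.

```python
# Toy illustration only: hypothetical agents, levels, and trust components.
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    kind: str                      # "human" or "artificial"
    level: str                     # e.g. "frontline", "platform", "governance"
    trust_components: set = field(default_factory=set)

# Traditional setting: one agent (e.g. a GP) holds every component a patient
# relies on when deciding whether to trust.
gp = Agent("GP", "human", "frontline",
           {"competence", "benevolence", "integrity", "accountability"})

# Distributed setting: the same components still exist in aggregate, but they
# are split across different agents ('geographically') and across levels.
system = [
    Agent("triage model", "artificial", "frontline", {"competence"}),
    Agent("clinician", "human", "frontline", {"benevolence"}),
    Agent("platform operator", "human", "platform", {"accountability"}),
    Agent("regulator", "human", "governance", {"integrity"}),
]

def components_present(agents):
    """Union of the trust components held anywhere in a set of agents."""
    return set().union(*(a.trust_components for a in agents))

print(components_present([gp]))     # all four components located in one agent
print(components_present(system))   # all four present, but no single agent holds them
```

The aggregate set of components is unchanged; what changes is that no single agent can be the object of trust, which is what makes the trust problem 'second order'.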

Things I did

  • Presented our Apps paper to the wider research group. If you've read previous weeknotes, you'll know that I've been working on a paper auditing the quality of the evidence supporting the claims of efficacy made by medical, health, and wellness apps available on the US and UK iOS App Stores. This week my colleague Joel Laitlia presented our initial findings to the wider research group at the Digital Ethics Center. We got a lot of questions, which is always a sign that the research is interesting.





  • Presented to the Teens in AI Action Forum on the ethics of AI in healthcare, focusing on the importance of being ethically mindful. A great opportunity to engage with the future leaders of AI, with some great questions, in particular on the role that empathy may play in getting AI right for healthcare.


  • Recorded a podcast for Digital Health Rewired ahead of being one of the keynote speakers at the conference in March.

  • Appeared in the Guardian. Technically this is not something I did directly, but the wonderful Jeni Tennison, previously of the ODI and now of Connected by Data, wrote a great op-ed in The Guardian warning Labour not to see AI as a silver bullet for the NHS and to focus on genuine, preemptive public engagement to avoid Tory tech solutions. It both quotes me and cites my entire thesis (which is pretty cool!). The op-ed is here - it's well worth a read (though I'm biased, obviously).



Things I thought about

  • Ethics of population-level algorithmic risk stratification. As part of my thinking on the concept of "personalised unwellness", I spent a lot of time thinking about the ethical implications of effectively continual algorithmic risk stratification. So not risk stratification tools like QRISK that are run by clinicians in a clinic setting, but risk stratification algorithms that use ML, and run in the background, to identify people who are potentially at risk of specific conditions so that they can be invited to participate in research studies or encouraged to take preventive action (a toy sketch of what such a background model might look like follows the list of concerns below). Some of this draws on a paper I'm writing about how AI might undermine the values of the NHS, but in brief, I think the main concerns are:

    • Consent & Privacy - how would this work practically? Would patients be able to consent to, or opt out of, their data being used in this manner if they would prefer not to be alerted to potential risk? If opting out is not possible, how would this subvert data protection law? How would it undermine the meta-right not to know? How would it harm self-integrity and autonomy?

    • Discriminatory Inferences - it's not necessarily the modelling itself, or even the 'recall' of at-risk patients, that presents a problem, but the potential for discriminatory inferences to be drawn about the patient populations identified as being at greatest risk and about why this might be the case.

    • Over-medicalisation of life (correlation is not causation) - ML-derived risk stratification models are designed to take into account as many data points about a person as possible in order to 'personalise' the risk prediction. This 'digital phenotyping' activity can involve real-time analysis of data generated about every aspect of a person's life - from what they eat, to how much exercise they do, to how stressed they are, etc. Whilst this might result in more targeted predictions, we don't actually know that this is true (there is currently insufficient evidence). In addition, ML models are excellent pattern-recognition machines, but they don't identify causality. This could mean that all we end up doing is over-medicalising every aspect of life in order to derive ineffective correlations.

    • Shifting responsibility - finally, there is the continuing issue I have with the overarching argument of 'empowerment', i.e., the idea that by giving people information about their future health prospects they will somehow magically become empowered to do something about it. This negates the fact that people cannot always interpret health information, or might not have the means (or desire) to act on the information they do receive. So, in reality, 'empowerment' is really shifting the responsibility for maintaining health from the state and healthcare institutions to the individual. Using ML to derive very targeted predictive models just puts more pressure on individuals to control more aspects of their lives for the purpose of preventing potential future ill-health.
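
For anyone less familiar with what 'running in the background' means in practice, here is a minimal, entirely hypothetical sketch of the kind of population-level scoring described above. The features, labels, model choice, and threshold are all invented for illustration; the point is simply that a model trained on routinely collected (and increasingly 'digital phenotype') data can score a whole population and flag people for contact without any clinic visit taking place.

```python
# Hypothetical sketch of background, population-level risk stratification.
# All features, labels, and thresholds are synthetic and purely illustrative.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1_000

# Synthetic 'digital phenotype' features; a real deployment might draw on
# far more data points (diet, activity, stress, etc.) from many sources.
population = pd.DataFrame({
    "age": rng.integers(18, 90, n),
    "bmi": rng.normal(27, 5, n),
    "daily_steps": rng.normal(6000, 2500, n).clip(0),
    "prior_conditions": rng.integers(0, 4, n),
})
# Synthetic outcome labels, standing in for historically observed disease.
outcome = (rng.random(n) < 0.1).astype(int)

# The model learns correlations between features and outcome - nothing here
# establishes causation, which is part of the concern discussed above.
model = LogisticRegression(max_iter=1000).fit(population, outcome)

# 'Background' scoring: everyone is scored continuously, and anyone above an
# arbitrary threshold is flagged for an invitation or preventive nudge.
risk = model.predict_proba(population)[:, 1]
flagged = population[risk > 0.2]
print(f"{len(flagged)} of {n} people would be flagged for contact")
```

Even this toy version makes the consent question concrete: none of the 1,000 synthetic 'people' did anything to ask for a risk score.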

  • Tony Blair Institute Report. At the end of January, the Tony Blair Institute published a report suggesting that the NHS should create a data trust that "would treat NHS data as a competitive asset whose value can be realised for the benefit of the public. This would involve providing anonymised data to research entities, including biotech companies, in return for financial profit that would then benefit our health service." This has been widely interpreted by the press as a suggestion that the NHS should "sell" its data to fund AI and biotech innovation. Whilst this is not exactly what the report says, I've still spent a long time thinking about why the narrative that NHS data is a 'profit-making asset' is problematic and yet continues to be perpetuated. Expect more on this in the very near future, but in brief, my main issues with the "selling" argument are as follows:

    • Public trust is about more than privacy, and privacy is not sufficiently protected by de-identification. Selling is generally deemed unacceptable to both patients and publics. This is in part because NHS data is perceived to be a common good and there's no way to guarantee shared equity from shared output via commercial routes.

    • "Selling" implies a transfer of data ownership which would complicate things in a number of different ways including loss of control of use case.

    • The "value" of NHS data isn't static; it's very likely that the value of NHS data will go up after it's sold, following curation, meaning it could then be sold on for more profit - taking potential income away from the NHS.

    • Siloed data is already a major issue and this would likely worsen it - increased value = increased desire to act like a fiefdom.

  • UK Government response to AI consultation. This week the UK Government published its long-awaited response to its consultation on its 2023 AI White Paper. I read it the morning it was published and had rather a lot of thoughts, which I shared on the day and summarise below. First, the response is comprehensive and acknowledges the future need for legislation. BUT the rhetoric used is still concerning and there is still no investment in underpinning infrastructure.

    From a rhetorical perspective, everything is back-to-front IMHO: innovation-friendly regulation, not regulation-friendly innovation; data protection for innovation and privacy, not privacy and innovation; CDEI = responsible adoption, not responsible development. All this implies that the number 1 priority is to get AI adopted into all sectors of society ASAP to maximise efficiency gains and monitor the consequences later. Many AI harms appear far earlier in the development chain and are ignored by this approach. Indeed, the whole "safety-first" approach implies that all AI harms are quantifiable and therefore readily preventable through technical interventions that can be tested pre-deployment. This is categorically not true: harms and safety issues are not one and the same. Social acceptability and ethical justifiability fall outside the remit of safety. Paying passing attention to bias and discrimination is an insufficient acknowledgement of the true societal risks posed, and suggests it is possible to create non-biased AI (it is not). The principle of "appropriate transparency and explainability" is also light - appropriate to whom? In the context of the rest of the paper, one assumes appropriate to private third parties most concerned with protecting IP - not appropriate to a public concerned with harms and privacy. Furthermore, stating that regulation must work for innovators and prioritise ease of compliance above all else ignores the fact that regulation can be designed to ensure this without being hands-off. It would be better to develop long-lead-time regulation with relatively flexible clauses than to rely on voluntary measures for too long. Voluntary measures don't work - look at interoperability in the NHS. Waiting too long may result in chilling effects that could be avoided.

    From an infrastructural perspective, there is no indication of how the rules developed might intersect with, e.g., the EU AI Act, and no acknowledgement of the need to drastically upgrade much of the UK's legacy public sector technical architecture. Statements that the UK has the best universities and science etc. gloss over the fact that most UK PhD stipends and postdoc salaries are no longer competitive and the UK has increasingly restrictive immigration laws - creating a brain drain rather than a brain draw.

    None of this takes away from the amount of work I'm sure has gone into producing the consultation response. I know getting this stuff right is really hard; I just worry constantly about the message 'innovation > everything' sends.

(A selection of) Things I read

  • Adam, Mary B., and Angela Donelson. “Trust Is the Engine of Change: A Conceptual Model for Trust Building in Health Systems.” Systems Research and Behavioral Science 39, no. 1 (January 2022): 116–27. https://doi.org/10.1002/sres.2766.

  • Asan, Onur, Alparslan Emrah Bayrak, and Avishek Choudhury. “Artificial Intelligence and Human Trust in Healthcare: Focus on Clinicians.” Journal of Medical Internet Research 22, no. 6 (June 19, 2020): e15154. https://doi.org/10.2196/15154.

  • Egede, Leonard E., and Charles Ellis. “Development and Testing of the Multidimensional Trust in Health Care Systems Scale.” Journal of General Internal Medicine 23, no. 6 (June 2008): 808–15. https://doi.org/10.1007/s11606-008-0613-1.

  • Gille, Felix, Anna Jobin, and Marcello Ienca. “What We Talk about When We Talk about Trust: Theory of Trust for AI in Healthcare.” Intelligence-Based Medicine 1–2 (November 2020): 100001. https://doi.org/10.1016/j.ibmed.2020.100001.

  • Höglund, Lars, Elena Maceviciute, and T. D. Wilson. “Trust in Healthcare: An Information Perspective.” Health Informatics Journal 10, no. 1 (March 2004): 37–48. https://doi.org/10.1177/1460458204040667.

  • Jabeen, Farhana, Zara Hamid, Adnan Akhunzada, Wadood Abdul, and Sanaa Ghouzali. “Trust and Reputation Management in Healthcare Systems: Taxonomy, Requirements and Open Issues.” IEEE Access 6 (2018): 17246–63. https://doi.org/10.1109/ACCESS.2018.2810337.

  • LaRosa, Emily, and David Danks. “Impacts on Trust of Healthcare AI.” In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, 210–15. New Orleans, LA, USA: ACM, 2018. https://doi.org/10.1145/3278721.3278771.

  • Sousa‐Duarte, Fernanda, Patrick Brown, and Ana Magnólia Mendes. “Healthcare Professionals’ Trust in Patients: A Review of the Empirical and Theoretical Literatures.” Sociology Compass 14, no. 10 (October 2020): 1–15. https://doi.org/10.1111/soc4.12828.

  • Sutherland, Bryn L., Kristin Pecanac, Taylor M. LaBorde, Christie M. Bartels, and Meghan B. Brennan. “Good Working Relationships: How Healthcare System Proximity Influences Trust between Healthcare Workers.” Journal of Interprofessional Care 36, no. 3 (May 4, 2022): 331–39. https://doi.org/10.1080/13561820.2021.1920897.

  • Webb Hooper, Monica, Charlene Mitchell, Vanessa J. Marshall, Chesley Cheatham, Kristina Austin, Kimberly Sanders, Smitha Krishnamurthi, and Lena L. Grafton. “Understanding Multilevel Factors Related to Urban Community Trust in Healthcare and Research.” International Journal of Environmental Research and Public Health 16, no. 18 (September 6, 2019): 3280. https://doi.org/10.3390/ijerph16183280.

  • Wesson, Donald E., Catherine R. Lucey, and Lisa A. Cooper. “Building Trust in Health Systems to Eliminate Health Disparities.” JAMA 322, no. 2 (July 9, 2019): 111. https://doi.org/10.1001/jama.2019.1924.

  • Bahtiyar, Şerif, and Mehmet Ufuk Çağlayan. “Trust Assessment of Security for E-Health Systems.” Electronic Commerce Research and Applications 13, no. 3 (May 2014): 164–77. https://doi.org/10.1016/j.elerap.2013.10.003.

  • Bilal Unver, Mehmet, and Onur Asan. “Role of Trust in AI-Driven Healthcare Systems: Discussion from the Perspective of Patient Safety.” Proceedings of the International Symposium on Human Factors and Ergonomics in Health Care 11, no. 1 (September 2022): 129–34. https://doi.org/10.1177/2327857922111026.

  • Groenewegen, Peter P., Johan Hansen, and Judith D. De Jong. “Trust in Times of Health Reform.” Health Policy 123, no. 3 (March 2019): 281–87. https://doi.org/10.1016/j.healthpol.2018.11.016.

  • Hong, Hyehyun, and Hyun Jee Oh. “The Effects of Patient-Centered Communication: Exploring the Mediating Role of Trust in Healthcare Providers.” Health Communication 35, no. 4 (March 20, 2020): 502–11. https://doi.org/10.1080/10410236.2019.1570427.

  • Jermutus, Eva, Dylan Kneale, James Thomas, and Susan Michie. “Influences on User Trust in Healthcare Artificial Intelligence: A Systematic Review.” Wellcome Open Research 7 (February 18, 2022): 65. https://doi.org/10.12688/wellcomeopenres.17550.1.

  • Kittelsen, Sonja Kristine, and Vincent Charles Keating. “Rational Trust in Resilient Health Systems.” Health Policy and Planning 34, no. 7 (September 1, 2019): 553–57. https://doi.org/10.1093/heapol/czz066.

  • Van Der Schee, Evelien, Peter P. Groenewegen, and Roland D. Friele. “Public Trust in Health Care: A Performance Indicator?” Edited by Michael Calnan. Journal of Health Organization and Management 20, no. 5 (September 1, 2006): 468–76. https://doi.org/10.1108/14777260610701821.
