Jessica Morley

2024 *Weeknotes #8

Introduction

*Technically fortnightly notes but it's not as catchy

This Sunday sees me wiped out, watching the Eras Tour at home whilst reflecting on the busiest two weeks of all time: back in the UK for a flying visit and bouncing between Windsor, central London, Birmingham, Oxford, and Cambridge. It's been an excellent trip - really re-invigorating, whilst also being absolutely exhausting. I think I have had more social interaction in the space of two weeks than I normally would in two years.


Anyway, on with the notes.

Things I worked on

  • Apps Paper: If you've read weeknotes before or seen my talk on rebranding digital health, you will know I have been working for a while now on a paper auditing the UK and US iOS app stores, and the evidence that is available to support claims of efficacy. The paper is now in its final stages of writing, and whilst sat on trains, I have been poking around with the structure and analysis to ensure it flows.

  • Justifiability of investment in AI paper: Likewise, I have mentioned previously that there has been a paper in the works for a long time on whether the current level of (public and private) investment in the development of AI for healthcare is justifiable based on AI's ability to meet identified healthcare system needs. This paper is now also in its final stages and I have been tweaking it whilst on trains over the last two weeks. Mostly this tweaking has involved checking for any relevant literature I might have missed - for example, a couple more health economic papers on AI have been published recently.

  • Video, blog, podcast content: I have been trying, for a while, to ensure my research is made as accessible as possible for the widest possible range of audiences as I want what I work on to be useful and not just academic. With this in mind, I have started to plan a series of 10-15 minute explainer videos (as well as some social media content) and a podcast with my good friend Joe, all focused on explanation and education rather than 'chat'. The aim is to try to enable as many people working in the health data/digital/AI space as possible to adopt a 'critical' (or as I call it 'skeptically optimistic') attitude.

  • Paper Reviews: I am currently drowning in paper review requests - anywhere between 4 and 8 a day at the moment. It is simply not possible to accept them all, but I do try to accept 1-2 every week, which means I am near-constantly working on reviews. Over the last two weeks, I have reviewed papers for BMC Medical Ethics, JMIR, and Social Science & Medicine.


Things I did

SO MUCH


  • Wrote an X thread/LinkedIn post on how I would spend the £3.4 billion investment in NHS IT announced in the spring budget. The original thread is here. It is worth pointing out that some people have questioned why I think the GP record should be the single source of truth and (correctly) highlighted that it is not currently capable of being so. This is one of those cases, I think, where the lack of character space on X is a disadvantage. I do not mean to imply below that the current set-up of GP records, controlled by a duopoly, is capable of being the single source of all health data collected by the system at the moment. Instead, I mean that this should be the overriding aim when decisions regarding records are made going forward. My reasons for this are twofold: 1) GP records are already the richest (i.e., most detailed) records held by the NHS, so it makes sense to further consolidate and to go to the source with the greatest gravitational pull; and 2) continuity of care - primary care is the heart of the NHS (despite what current policymakers would have you believe) and it should, therefore, be the heart of NHS record keeping. Others are, of course, free to disagree with me, but this is my opinion as it stands. Original thread copied below:

    • So, the #SpringBudget inc £3.4bn investment in capital to fund new technology for the #NHS to enable efficiency gains. Great, but let's hope it's spent wisely. Here's where I'd put my money ...

    • Focus on fixing the basics first and building incrementally, no grandiose statements and over-promises to make the NHS "paperless" or to #axethefax, nor creating "exemplars" but bringing the whole system up to a consistent basic level.

    • Yes, aim to make EHRs available everywhere, but not "just" available - also invest now in making EHRs standardised, interoperable, user friendly (involve HCI designers please), equipped with appropriate read/write API, and capable of supporting data portability. Ensure any investments in EHRs facilitate the overall ambition of making the GP record the single source of truth. That will require ensuring information can be added to it without requiring practice staff to copy & paste text in or manually add scanned pdfs.

    • Invest in the infrastructure to enable dynamic consent/ more nuanced opt-outs rather than blunt instruments that are poorly understood and often implemented badly. Make it possible to audit compliance with opt-outs across the whole system.

    • Make clinical guidelines computer-readable and integrate them properly with clinical decision support tools and audit & feedback. Make sure CDSS is well-designed (ideally centrally managed), audited, and outcomes-focused. No updates to care guidelines etc. in email attachments.

    • Drastically consolidate data flows. Richer data in fewer places. Think about how to connect auxiliary datasets into the main ‘pipeline’ e.g., registries, cohort studies etc. This will help with research sure, but also monitoring the service & communication across the system.

    • E-prescribing in secondary care. Just do it already.

    • Survey all NHS trusts, practices etc. to see what hardware is needed and then buy it. It's 2024; we shouldn't (but do) have wards running on COWs (computers on wheels) with a broken keyboard that staff had to borrow from another ward, and that nobody knows whose job it is to fix. No more reliance on BYOD.

    • Create a central cloud-based archive for all posthumous data (please, for the love of Taylor, streamline the rules governing its use and storage first). Don't forget about unstructured data.

    • Invest significantly in data curation, focus on creating NHS 'algorithm ready' datasets for training, validation and evaluation of 'AI' models (& simpler models claiming to be AI). This is more important than investing in AI itself - that will happen anyway. Make sure it's safe.

    • Also create a central mechanism for reporting IT-related safety incidents and regularly monitoring these. In general, focus on auditability and documentation across the whole IT system.

    • Don't forget about the care sector. Some of this money needs to go there too. The two systems interact constantly and poor IT communication between the NHS and (e.g.,) care homes results in patient safety issues & inefficiencies. Care needs access to GP records (with proper governance).

    • Finally, remember technical infrastructure isn't somehow entirely separate from the people who use it. Better IT won't magically improve efficiency without appropriate training, upskilling, pathway design, & more. Involve NHS staff in every single tech decision from the start.
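
The opt-out point above can be made concrete. Below is a minimal, hypothetical sketch (all names, IDs, and data structures are invented for illustration; this is not a real NHS API or dataset) of what auditing a downstream data flow against an opt-out register might look like: given the set of patients who have opted out, flag any of their records that still appear in an extract.

```python
# Hypothetical sketch of opt-out compliance auditing.
# All identifiers and structures are invented for illustration.

def audit_opt_outs(opted_out_ids, downstream_records):
    """Return the IDs of opted-out patients whose records still appear
    in a downstream extract. An empty list means the flow is compliant."""
    opted_out = set(opted_out_ids)
    return [r["patient_id"] for r in downstream_records
            if r["patient_id"] in opted_out]

# Illustrative data: two patients have opted out of secondary use.
opt_out_register = ["NHS001", "NHS007"]
research_extract = [
    {"patient_id": "NHS002", "dataset": "registry"},
    {"patient_id": "NHS007", "dataset": "cohort-study"},  # breach: opted out
]

violations = audit_opt_outs(opt_out_register, research_extract)
print(violations)  # → ['NHS007']
```

The interesting design question is less the check itself than where it runs: auditing compliance "across the whole system", as the thread argues, means every flow between datasets would need to be visible to a check like this, not just the final research extract.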

  • Presented at the Nuffield Summit on AI. On 7th March, I headed to the Nuffield Summit held in Windsor, first to present for fifteen minutes on why implementing AI in the NHS is more challenging than current policy rhetoric might suggest, and then to join a panel of esteemed colleagues discussing the same. To say I was nervous would be the understatement of the century. The AI session had a lot of build-up: it followed a speech by the current Secretary of State, Victoria Atkins, and preceded a speech by the CMO, Chris Whitty. I was texting Luciano (director of the DEC at Yale) telling him I felt sick and swallowing anti-panic propranolol (*I have it on prescription for panic disorder*) before I went on. Luckily it went well and people seemed to respond positively. You can watch the session below, view my slides here, or read my thesis, upon which the presentation was based, here. I am planning to self-record the full lecture version of this talk ASAP.



  • Presented on the ethics of AI in healthcare at an event held by the AI Ethics Society in Cambridge, alongside four other panelists. Despite being in the 'other' place, the event was really good - it was nice to hear questions from students that showed real depth of thinking, and hearing about the applied and forward-thinking approach of MeditSimple was reassuring. A recording of the event (forewarning: there were a number of technical issues that made it less than smooth) can be found here.

  • Presented on the ethics of digital transformation at Digital Health Rewired alongside three other colleagues - including Dr. Vin, who is currently the interim head of Digital Transformation at NHS England. Another nerve-wracking experience. Rewired moved me to be a keynote on the main stage about a week before I flew over to the UK, and there were more people in the audience than I have ever spoken in front of before. I also forgot to bring my glasses to Birmingham so couldn't really see. More than anything, I was not sure how my presentation would be received given its critical nature and its emphasis on the ethical risks and challenges posed by digital health transformation (the creation of personal unwellness and the inverse data quality law). Fortunately, it went very well and has been described as a 'barnstormer', which I think is really a polite way of describing my tendency to bounce around the stage like a CBBC presenter. Unfortunately, it was not recorded, but you can view my slides here, watch a full version of the lecture below, and I am planning to self-record the 15-minute pitch once I'm back in the US.



  • Wrote a post on LinkedIn about the Government's independent report on equity in medical devices, including software as a medical device. The short version of the post: the recommendations are about equality, not equity, and there is really a need to focus on both; and the recommendations are too 'nice to have' when I think recommendations with real teeth are required. The text of the post is below:

    • On Monday DHSC published https://lnkd.in/gkAYkyvk this report on Equity in Medical Devices it includes 8 recommendations on preventing bias in AI-assisted medical devices. Great to see attention being paid to this area, but now let's discuss.

    • Whilst the recommendations consistently use the word equity, mostly they mean equality. There's an important difference. Equality = all the same (i.e., non-biased datasets); equity = everyone getting what they need to achieve the same outcomes, and this need might vary. Recognising this difference matters because, whilst I think the recommendations might go some way to helping support equality in terms of, e.g., equal statistical accuracy for different populations, I don't think they will support equity. I think this because equity requires a systems-level view of the causes of inequity in healthcare outcomes. The recommendations don't take this view.

    • The recommendations focus on the following areas: 1. engaging with diverse stakeholders in the design process of AI-enabled devices; 2. guidance & training for AI developers; 3. governance of AI-enabled devices; and 4. planning for the disruption of LLMs/ foundation models.

    • To genuinely target equity, the recs would also need to include some proactive equity-promoting (rather than bias-mitigating) action. For example, prioritising the development of AI for populations currently underserved by the healthcare system (this is more than ensuring under-represented groups get access to research funding - though of course this is important and positive to see in the report). Or thinking about what might be needed to ensure EHR design supports the collection of equitable information. This is not just about data standardisation; it's also about what is prioritised at the point of collection, e.g., how are drop-downs ordered? There would also need to be much greater focus on the decision-making process that happens in collaboration between AI-enabled devices and human decision-makers. It's entirely possible that an algorithm might be equally accurate, but the outcome of its use might be inequitable.

    • Much of this is about focusing attention on the difference between information wants vs. information needs, and ensuring AI-enabled devices serve the latter, not the former. The healthcare system might have equal information wants for all populations, but the information needs relating to the healthcare of different populations will vary.

    • I also think the recs could go further. In general (except upgrading to the MHRA risk framework) they have a 'nice to have' feeling rather than a 'we are going to mandate this now' one. It's a necessary start, but not sufficient. I'd like more teeth.

    • I say this not to criticise (the recs are good), but to encourage more critical thinking when policymakers begin to think about acting on the recs.
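
The point in the post that an equally accurate algorithm can still produce inequitable outcomes can be illustrated with a toy example (all numbers are invented for illustration): if two populations see the same model accuracy but one has lower access to follow-up care, the realised benefit of the model's use differs.

```python
# Toy illustration with invented numbers: equal model accuracy across two
# groups does not imply equitable outcomes once differential access to
# follow-up care is taken into account.

groups = {
    # accuracy: proportion of cases the model flags correctly;
    # follow_up: chance a correct flag actually leads to treatment
    "group_a": {"accuracy": 0.90, "follow_up": 0.80},
    "group_b": {"accuracy": 0.90, "follow_up": 0.40},
}

# Realised benefit = share of cases that are correctly flagged AND treated.
benefits = {name: g["accuracy"] * g["follow_up"] for name, g in groups.items()}

for name, b in benefits.items():
    print(f"{name}: accuracy = {groups[name]['accuracy']:.2f}, "
          f"realised benefit = {b:.2f}")

# Equal accuracy (0.90 vs 0.90), unequal realised benefit (0.72 vs 0.36):
# equality at the model level, inequity at the outcome level.
```

This is the systems-level view the post argues for: auditing the dataset and the model alone would show no disparity here; the inequity only appears when the whole care pathway around the device is included.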

  • Gave a lecture to the MSt in Practical Ethics at Oxford. The lecture I gave is a version of the 'from personalised unwellness to algorithmically enhanced public health' talk that I self-recorded and embedded above. It was a long lecture that ran well over time, as lots of the students were very engaged and asked extremely interesting and pertinent questions - particularly about why we need to pursue both equality and equity at the same time.


Things I thought about

  • Where am I?

  • What day is it?

  • What time is it?

  • When did I last sleep?

(A selection of) Things I read

All reading for the paper I am planning on distributed trust (a concept that appears in the rebranding digital health talk)

  • Abbasi, Kamran. “Transparency and Trust.” BMJ 329, no. 7472 (October 21, 2004): 0-g. https://doi.org/10.1136/bmj.329.7472.0-g.

  • Blois, Keith. “Is It Commercially Irresponsible to Trust?” Journal of Business Ethics 45, no. 3 (July 1, 2003): 183–93. https://doi.org/10.1023/A:1024115727737.

  • Brown, A.J., Wim Vandekerckhove, and Suelette Dreyfus. “The Relationship between Transparency, Whistleblowing, and Public Trust,” 30–58, 2014. https://doi.org/10.4337/9781781007945.00008.

  • Fahim, Md Abdullah Al, Mohammad Maifi Hasan Khan, Theodore Jensen, Yusuf Albayram, and Emil Coman. “Do Integral Emotions Affect Trust? The Mediating Effect of Emotions on Trust in the Context of Human-Agent Interaction.” In Proceedings of the 2021 ACM Designing Interactive Systems Conference, 1492–1503. DIS ’21. New York, NY, USA: Association for Computing Machinery, 2021. https://doi.org/10.1145/3461778.3461997.

  • Fan, Xiaocong, Sooyoung Oh, Michael McNeese, John Yen, Haydee Cuevas, Laura Strater, and Mica R. Endsley. “The Influence of Agent Reliability on Trust in Human-Agent Collaboration.” In Proceedings of the 15th European Conference on Cognitive Ergonomics: The Ergonomics of Cool Interaction, 1–8. ECCE ’08. New York, NY, USA: Association for Computing Machinery, 2008. https://doi.org/10.1145/1473018.1473028.

  • Foláyan, Morẹ́nikẹ́ Oluwátóyìn, and Bridget Haire. “What’s Trust Got to Do with Research: Why Not Accountability?” Frontiers in Research Metrics and Analytics 8 (November 13, 2023): 1237742. https://doi.org/10.3389/frma.2023.1237742.

  • Heald, David. “Transparency-Generated Trust: The Problematic Theorization of Public Audit.” Financial Accountability & Management 34, no. 4 (2018): 317–35. https://doi.org/10.1111/faam.12175.

  • Hoff, Kevin Anthony, and Masooda Bashir. “Trust in Automation: Integrating Empirical Evidence on Factors That Influence Trust.” Human Factors 57, no. 3 (May 1, 2015): 407–34. https://doi.org/10.1177/0018720814547570.

  • Jøsang, Audun, and Stéphane Lo Presti. “Analysing the Relationship between Risk and Trust.” In Trust Management, edited by Christian Jensen, Stefan Poslad, and Theo Dimitrakos, 135–45. Lecture Notes in Computer Science. Berlin, Heidelberg: Springer, 2004. https://doi.org/10.1007/978-3-540-24747-0_11.

  • Kaltenbach, Elizabeth, and Igor Dolgov. “On the Dual Nature of Transparency and Reliability: Rethinking Factors That Shape Trust in Automation.” Proceedings of the Human Factors and Ergonomics Society Annual Meeting 61, no. 1 (September 1, 2017): 308–12. https://doi.org/10.1177/1541931213601558.

  • Koenig, Melissa A., and Paul L. Harris. “The Basis of Epistemic Trust: Reliable Testimony or Reliable Sources?” Episteme 4, no. 3 (October 2007): 264–84. https://doi.org/10.3366/E1742360007000081.

  • Kwan, David, Luiz Marcio Cysneiros, and Julio Cesar Sampaio do Prado Leite. “Towards Achieving Trust Through Transparency and Ethics.” In 2021 IEEE 29th International Requirements Engineering Conference (RE), 82–93, 2021. https://doi.org/10.1109/RE51729.2021.00015.

  • O’Brien, Bridget C. “Do You See What I See? Reflections on the Relationship Between Transparency and Trust.” Academic Medicine 94, no. 6 (June 2019): 757. https://doi.org/10.1097/ACM.0000000000002710.

  • O’Hara, Kieron. “Transparency, Open Data and Trust in Government: Shaping the Infosphere.” In Proceedings of the 4th Annual ACM Web Science Conference, 223–32. WebSci ’12. New York, NY, USA: Association for Computing Machinery, 2012. https://doi.org/10.1145/2380718.2380747.

  • Pearson, Carl J., Allaire K. Welk, William A. Boettcher, Roger C. Mayer, Sean Streck, Joseph M. Simons-Rudolph, and Christopher B. Mayhorn. “Differences in Trust between Human and Automated Decision Aids.” In Proceedings of the Symposium and Bootcamp on the Science of Security, 95–98. HotSos ’16. New York, NY, USA: Association for Computing Machinery, 2016. https://doi.org/10.1145/2898375.2898385.

  • Roelofs, Portia. “Transparency and Mistrust: Who or What Should Be Made Transparent?” Governance 32, no. 3 (2019): 565–80. https://doi.org/10.1111/gove.12402.

  • Thakor, Richard T, and Robert C Merton. “Trust, Transparency, and Complexity.” The Review of Financial Studies 36, no. 8 (August 1, 2023): 3213–56. https://doi.org/10.1093/rfs/hhad011.

  • Walker, Kristen L. “Surrendering Information through the Looking Glass: Transparency, Trust, and Protection.” Journal of Public Policy & Marketing 35, no. 1 (April 1, 2016): 144–58. https://doi.org/10.1509/jppm.15.020.

  • Zerilli, John, Umang Bhatt, and Adrian Weller. “How Transparency Modulates Trust in Artificial Intelligence.” Patterns 3, no. 4 (April 2022): 100455. https://doi.org/10.1016/j.patter.2022.100455.
