Jessica Morley

2024 Weeknotes #3


A busy week this week, slowly getting into the swing of things in a new job, new university, new country, etc.! It was also my first week officially being able to refer to myself as 'Dr', the novelty of which I hope never wears off. A particular highlight this week was a presentation led by Joyce Guo, currently an MPP student at the Yale Jackson School of Global Affairs, on her and her team's work on the complex relationship between AI and sustainability. I won't say more as it's not my work, but I will say that it was excellent and I am really looking forward to seeing how the research progresses. I also spent a lot of time practicing Italian on Duolingo this week (apparently I now know 243 words), which has been a lot of fun. I think, as an academic, the most useful phrase I have learned thus far is probably "vorrei un caffè con latte per favore." Everyone knows research papers are fueled by blood, sweat, and caffeine.

Ok, now on with the weeknotes!

Things I worked on

  • App Store Audit. As mentioned in the previous two weeknotes, we are currently working on a paper auditing the evidence available to support the efficacy claims of so-called interventional apps on the iOS App Store. Thus far we have found that apps most commonly target mental health; that about half of the available apps claim to be effective in some regard; that most of these claims are extremely vague; that the majority of apps have some 'evidence' to support these vague claims; and that almost all of this evidence is of extremely low quality, mostly based on user statistics. It's going to be a great paper when it is finished.

  • AI & NHS values paper. The fifth chapter of my thesis, "be careful what you wish for", is a scenario-based exploration of what could go 'wrong' if the way in which algorithmic clinical decision support software ('AI') is implemented in the NHS is not carefully thought through. Specifically, it examines three scenarios in which the patient is displaced from the centre of care; the NHS is no longer committed to providing quality care; and the NHS ceases to be for all. It explains that these scenarios might occur due to: changes in what counts as knowledge about the body, or evidence of the presence or absence of illness; changes to what counts as 'good' patient behavior; a loss of the right not to know; shifts in power dynamics; loss of trust and consequent damage to the ethics of care; loss of accountability; baked-in existing bias; and new sources of bias. I am currently working on turning this chapter into a paper exploring how the NHS can maintain its commitment to its founding principles and core values even in an algorithmically-enhanced future.

  • Developing the concept of 21st-century healthcare. Whilst developing my postdoc programme of research, I have also been developing the concept of 21st-century healthcare and what it actually means given the ever-increasing influx of digital technologies. It needs more work, but roughly I think it involves a shift from evidence-based medicine to algorithm-based medicine; from patient-centric care to 'digital-twin'-centric care; from a 1:1 relationship to a many:many relationship; and from a model of narrow trust to a model of distributed trust. More on this in the (hopefully not-too-distant) future.

  • Ethics of digital mental health chapter. I'm currently working on a chapter for the forthcoming companion to digital ethics, which is intended to act as a textbook for undergraduates interested in digital ethics. I am responsible for writing the health and wellbeing chapter, which uses mental health throughout as an exemplar of the ethical risks posed by digital health interventions. I am taking the same approach as in my 'guide to thinking critically about AI for healthcare', adapting it to show the different considerations attached to different technologies and illustrating those considerations with mental health examples.

Things I did

  • Presented the outline of my research programme (which I've been working on since the week 1 weeknotes of this year), including its theoretical underpinning, to the rest of the Digital Ethics Center (and some guests). It went pretty well, and now I can move forward with developing the concepts with a bit more confidence that I'm on the right track. More on this in a separate blog coming soon.

  • Reviewed papers for BMC Medical Ethics, Artificial Intelligence Review, BMJ Leader, Big Data & Society, and JAMA.

  • Pre-printed this paper on the need for the FDA to pay attention to the use of artificial intelligence (AI) in health insurance, particularly for the purpose of prior authorization, and to consider regulating it in the same way as software as a medical device is regulated. The paper was led by my excellent colleague Renée Sirbu and, of course, assisted by the brilliant Prof Luciano Floridi. The abstract is below.

Despite mounting enthusiasm regarding the introduction of artificial intelligence (AI) software as a medical device (SaMD) to clinical care and, consequently, the development of a new regulatory proposal for the federal oversight of AI/ML medical devices, little attention has been paid to the oversight of AI tools used by large insurers. The U.S. Food and Drug Administration (FDA) has advanced an “Action Plan” for clinical AI (CAI) governance. However, the U.S. healthcare system remains threatened by the unregulated application of insurance AI (IAI). In this article, we use IAI tools in the Medicare Advantage (MA) prior authorization pathway as an illustrative case to argue that these technologies require further regulatory attention by the FDA. Specifically, we propose a redefinition of “medical device” under the 21st Century Cures Act as necessarily inclusive of IAI and advance an actionable framework for FDA oversight in the approval of IAI tools for deployment by large healthcare insurers.

  • Published my whole doctoral thesis online via the Oxford Research Archive here. As I said above, I'm working on turning all the empirical chapters into individual papers that will hopefully be published independently, but it is still really exciting and satisfying to see the whole thing made available for whoever would like to read it. Full abstract below:

Established in 1948, the National Health Service (NHS) has lasted 75 years. It is, however, under considerable strain: facing chronic staff shortages; record numbers of emergency attendances; an ambulance wait-time crisis; and more. Increasingly, policymakers are of the view that the solution to these problems is to rely more heavily on one of the NHS’s greatest resources: its data. It is hoped that by combining the NHS’s data riches with the latest techniques in artificial intelligence (AI), the means to make the NHS more effective, more efficient, and more consistent can be identified and acted upon via the implementation of Algorithmic Clinical Decision Support Software (ACDSS). Yet, getting this implementation right will be both technically and ethically difficult. It will require a careful re-design of the NHS’s information infrastructure to ensure the implementation of ACDSS results in intended positive emergence (benefits), and not unintended negative emergence (harms and risks). This then is the purpose of my thesis. I seek to help policymakers with this re-design process by answering the research question ‘What are the information infrastructure requirements for the successful implementation of ACDSS in the NHS?’. I adopt a mixed-methods, theory-informed, and interpretive approach, and weave the results into a narrative policy synthesis.
I start with an analysis of why current attempts to implement ACDSS into the NHS’s information infrastructure are failing and what needs to change to increase the chances of success; anticipate what might happen if these changes are not made; identify the exact requirements for bringing forth the changes; explain why the likelihood of these requirements being met by current policy is limited; and conclude by explaining how the likelihood of policy meeting the identified requirements can be increased by designing the ACDSS’s supporting information infrastructure around the core concepts of ‘utility, usability, efficacy, and trustworthiness’.

Things I thought about

  • E-Trust. Whilst developing the concept of 21st-century healthcare described above, I have been thinking a lot about what is materially different about the relationships between stakeholders or agents (including artificial agents) in a digital healthcare environment, compared with an analogue one, and why this disrupts models of trust. I think it is largely about the verifiability of the reliability and 'trustworthiness' of the different agents in the two scenarios. In an analogue healthcare scenario, the number of agents involved in one care pathway is relatively controlled, and there are known mechanisms for verifying the trustworthiness of each 'node' or agent. In a digital healthcare scenario there are vastly more agents (from those creating the data used to train models, to those curating it, to those building the models, etc.), and there are no known mechanisms for verifying the trustworthiness of each of these agents. In addition, some pre-existing agents (most notably clinicians) have new responsibilities in a digital health scenario, and yet the verifiability mechanisms that exist for demonstrating their trustworthiness at 'traditional' tasks do not cover these new tasks. All this to say, I think a new theory of digital healthcare trust is required. To help me think this through I have been revisiting the theory of e-Trust developed by Profs Mariarosaria Taddeo and Luciano Floridi (see things I read), which is excellent and clear but doesn't (as of yet) quite stretch to the level of complexity involved in the digital health 'ecosystem', and so requires expanding. FUN!
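The intuition above can be made concrete with a toy sketch. This is my own illustration, not a model from the Taddeo and Floridi papers: it treats a care pathway as a chain of agents, where the pathway's trustworthiness is only verifiable if every agent's is. All agent names are hypothetical examples.

```python
# Toy model: a care pathway is a chain of agents; end-to-end trust is only
# verifiable if every agent has a known verification mechanism.
from dataclasses import dataclass


@dataclass
class Agent:
    name: str
    verifiable: bool  # is there a known mechanism to check this agent's trustworthiness?


def pathway_verifiable(agents: list[Agent]) -> bool:
    """A pathway's trustworthiness is verifiable only if every agent's is."""
    return all(a.verifiable for a in agents)


# Analogue scenario: few agents, each covered by an existing mechanism
# (e.g. professional licensing and regulation).
analogue = [Agent("GP", True), Agent("pharmacist", True)]

# Digital scenario: the same pathway plus many new agents, several of which
# currently have no verification mechanism at all.
digital = analogue + [
    Agent("data curator", False),
    Agent("model developer", False),
    Agent("deployment platform", False),
]

print(pathway_verifiable(analogue))  # True
print(pathway_verifiable(digital))   # False
```

The point of the sketch is structural: adding even one unverifiable node breaks end-to-end verifiability, which is why the move from a short analogue chain to a sprawling digital one disrupts existing trust models.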

  • Power and policy influence. If you know me then you know that I love Taylor Swift. I also (obviously) have a very deep-seated interest in algorithmic governance. This week the two interests collided, with Taylor Swift reportedly planning to sue in response to deep-faked 'porn' images of her being publicly released and spread on X. Deep-fakes of this nature have been an ethical issue for some time, and there have been numerous calls for their regulation that have largely fallen on deaf ears. With the involvement of Taylor Swift, however, the calls for greater regulation now seem to be gaining traction. What is interesting to me is not the case in and of itself, but that it exemplifies the extreme power imbalance between those who have the potential to influence policy and regulation and those who do not. Taylor Swift is the most famous woman on the planet, and for good reason: she is clearly a very talented musician with an almost unparalleled mastery of digital marketing. But she is not an expert in algorithmic governance or generative AI. It should not take social media platforms, regulators, and genAI developers being very publicly shamed by a person with an extreme level of fame and influence for them to pay attention to an issue that harms (and has been harming) many far less powerful individuals for an extended period of time. This implies that only the harms that occur to the very powerful in society 'matter', and that those affecting the 'everyday' person are irrelevant. How this power imbalance can be 'rectified' is something that has been occupying a lot of my free-thinking time this week.

  • Social media 'for good'. Social media often (and mostly rightly) gets portrayed in a negative light. Media conversations typically surround mis- and disinformation, and the public conversation around the role of social media in mental health is very much focused on the potential negative implications. These negatively-slanted conversations are necessary and important. However, the more I read about (e.g.) social media and mental health, the more obvious it becomes that this tells only a very small part of the story. Yes, social media can be highly damaging, but there is equal evidence that it can also be very beneficial, particularly to those who feel ostracised by IRL society. People can find connections, build social bonds, and explore facets of their identity that they cannot IRL, and there are correspondingly significant benefits to people's health, including their emotional wellbeing and mental health. This is not new news, as it were. But what I have been thinking about is how the design of social media platforms could be altered (or regulated) to make the benefits more easily achievable and the harms harder to surface. Not a question with an easy solution, but one worth considering.

(A selection of) Things I read

The highlighted papers are those I particularly enjoyed.

  • Elkhazeen, Abu, Chris Poulos, Xin Zhang, John Cavanaugh, and Matthew Cain. “A TikTok™ ‘Benadryl Challenge’ Death—A Case Report and Review of the Literature.” Journal of Forensic Sciences 68, no. 1 (January 2023): 339–42.

  • Farnood, Annabel. “The Effects of Online Self-Diagnosis and Health Information Seeking on the Patient-Healthcare Professional Relationship,” 2021.

  • Hornstein, Silvan, Kirsten Zantvoort, Ulrike Lueken, Burkhardt Funk, and Kevin Hilbert. “Personalization Strategies in Digital Mental Health Interventions: A Systematic Review and Conceptual Framework for Depressive Symptoms.” Frontiers in Digital Health 5 (2023): 1170002.

  • Larsen, Mark Erik, Jennifer Nicholas, and Helen Christensen. “Quantifying App Store Dynamics: Longitudinal Tracking of Mental Health Apps.” JMIR mHealth and uHealth 4, no. 3 (2016): e6020.

  • Lau, Nancy, Alison O’Daffer, Susannah Colt, Joyce P Yi-Frazier, Tonya M Palermo, Elizabeth McCauley, and Abby R Rosenberg. “Android and iPhone Mobile Apps for Psychosocial Wellness and Stress Management: Systematic Search in App Stores and Literature Review.” JMIR mHealth and uHealth 8, no. 5 (May 22, 2020): e17798.

  • Levy, J., and N. Romo-Avilés. “‘A Good Little Tool to Get to Know Yourself a Bit Better’: A Qualitative Study on Users’ Experiences of App-Supported Menstrual Tracking in Europe.” BMC Public Health 19, no. 1 (2019).

  • Licinio, Julio, and Ma-Li Wong. “Digital Footprints as a New Translational Approach for Mental Health Care: A Commentary.” Discover Mental Health 3, no. 1 (2023): 5.

  • Martínez-Castaño, R., J.C. Pichel, and D.E. Losada. “A Big Data Platform for Real Time Analysis of Signs of Depression in Social Media.” International Journal of Environmental Research and Public Health 17, no. 13 (2020): 1–23.

  • Onnela, Jukka-Pekka, and Scott L Rauch. “Harnessing Smartphone-Based Digital Phenotyping to Enhance Behavioral and Mental Health.” Neuropsychopharmacology 41, no. 7 (2016): 1691–96.

  • Pagoto, Sherry, Molly E Waring, and Ran Xu. “A Call for a Public Health Agenda for Social Media Research.” Journal of Medical Internet Research 21, no. 12 (2019): e16661.

  • Paripoorani, Deborah, Norina Gasteiger, Helen Hawley-Hague, and Dawn Dowding. “A Systematic Review of Menopause Apps with an Emphasis on Osteoporosis.” BMC Women’s Health 23, no. 1 (September 29, 2023): 518.

  • Peven, Kimberly, Aidan P Wickham, Octavia Wilks, Yusuf C Kaplan, Andrei Marhol, Saddif Ahmed, Ryan Bamford, et al. “Assessment of a Digital Symptom Checker Tool’s Accuracy in Suggesting Reproductive Health Conditions: Clinical Vignettes Study.” JMIR mHealth and uHealth 11 (December 5, 2023): e46718.

  • Qin, L., X. Zhang, A. Wu, J.S. Miser, Y.-L. Liu, J.C. Hsu, B.-C. Shia, and L. Ye. “Association between Social Media Use and Cancer Screening Awareness and Behavior for People without a Cancer Diagnosis: Matched Cohort Study.” Journal of Medical Internet Research 23, no. 8 (2021).

  • Ramos, Giovanni, Carolyn Ponting, Jerome P Labao, and Kunmi Sobowale. “Considerations of Diversity, Equity, and Inclusion in Mental Health Apps: A Scoping Review of Evaluation Frameworks.” Behaviour Research and Therapy 147 (2021): 103990.

  • Rodgers, Rachel F., Amy Slater, Chloe S. Gordon, Siân A. McLean, Hannah K. Jarman, and Susan J. Paxton. “A Biopsychosocial Model of Social Media Use and Body Image Concerns, Disordered Eating, and Muscle-Building Behaviors among Adolescent Girls and Boys.” Journal of Youth and Adolescence 49, no. 2 (February 2020): 399–409.

  • Stawarz, K., C. Preist, and D. Coyle. “Use of Smartphone Apps, Social Media, and Web-Based Resources to Support Mental Health and Well-Being: Online Survey.” JMIR Mental Health 6, no. 7 (2019).

  • Stoody, Vishvanie B, Hannah R Glick, Annie C Murphey, Julie M Sturza, and Ellen M Selkie. “A Content Analysis of Transgender Health and Wellness Themes Shared Through Social Media.” Clinical Pediatrics, 2023, 00099228231219499.

  • Su, Zhaoyuan, Mayara Costa Figueiredo, Jueun Jo, Kai Zheng, and Yunan Chen. “Analyzing Description, User Understanding and Expectations of AI in Mobile Health Applications.” AMIA Annual Symposium Proceedings 2020 (2020): 1170–79.

  • Taddeo, Mariarosaria, and Luciano Floridi. “The Case for E-Trust.” Ethics and Information Technology 13, no. 1 (March 2011): 1–3.

  • Taddeo, Mariarosaria. “Defining Trust and E-Trust: From Old Theories to New Problems.” International Journal of Technology and Human Interaction 5, no. 2 (April 1, 2009): 23–35.

  • Taddeo, Mariarosaria. “Modelling Trust in Artificial Agents, A First Step Toward the Analysis of e-Trust.” Minds and Machines 20, no. 2 (July 2010): 243–57.

  • Triptow, Christina, Jason Freeman, Paige Lee, and Thomas Robinson. “#HealthyLifestyle: A Q Methodology Analysis of Why Young Adults Like to Use Social Media to Access Health Information.” Journal of Health Psychology, 2023, 13591053231200690.

  • Ulvi, Osman, Ajlina Karamehic-Muratovic, Mahdi Baghbanzadeh, Ateka Bashir, Jacob Smith, and Ubydul Haque. “Social Media Use and Mental Health: A Global Analysis.” Epidemiologia 3, no. 1 (2022): 11–25.

  • Valentine, Lee, Simon D’Alfonso, and Reeva Lederman. “Recommender Systems for Mental Health Apps: Advantages and Ethical Challenges.” AI & Society 38, no. 4 (2023): 1627–38.

  • Valentine, Lee, Carla McEnery, Simon D’Alfonso, Jess Phillips, Eleanor Bailey, and Mario Alvarez-Jimenez. “Harnessing the Potential of Social Media to Develop the next Generation of Digital Health Treatments in Youth Mental Health.” Current Treatment Options in Psychiatry 6 (2019): 325–36.

  • Valkenburg, Patti M, Adrian Meier, and Ine Beyens. “Social Media Use and Its Impact on Adolescent Mental Health: An Umbrella Review of the Evidence.” Current Opinion in Psychology 44 (2022): 58–68.

  • Vaterlaus, J Mitchell, Emily V Patten, Cesia Roche, and Jimmy A Young. “#Gettinghealthy: The Perceived Influence of Social Media on Young Adult Health Behaviors.” Computers in Human Behavior 45 (2015): 151–57.

  • Wiederhold, Brenda K. “Social Media and Mental Health: Weighing the Costs and Benefits.” Cyberpsychology, Behavior, and Social Networking 24, no. 12 (December 1, 2021): 775–76.

  • Xie, Z., H. Liu, and C. Or. “A Discrete Choice Experiment to Examine the Factors Influencing Consumers’ Willingness to Purchase Health Apps.” mHealth 9 (2023).

  • Zhang, Alice Qian, Ashlee Milton, and Stevie Chancellor. “#Pragmatic or #Clinical: Analyzing TikTok Mental Health Videos,” 149–53, 2023.

  • Zwingerman, Rhonda, Michael Chaikof, and Claire Jones. “A Critical Appraisal of Fertility and Menstrual Tracking Apps for the iPhone.” Journal of Obstetrics and Gynaecology Canada 42, no. 5 (May 2020): 583–90.
