Scroll across for all my first-authored papers. The title will link you to the full paper, whilst the abstract for each is in the description. Scroll further down for co-authored papers, reports and external blogs.
Background: Although advanced analytical techniques falling under the umbrella heading of artificial intelligence (AI) may improve health care, the use of AI in health raises safety and ethical concerns. There are currently no internationally recognized governance mechanisms (policies, ethical standards, evaluation, and regulation) for developing and using AI technologies in health care. A lack of international consensus creates technical and social barriers to the use of health AI while potentially hampering market competition.
Objective: The aim of this study is to review current health data and AI governance mechanisms being developed or used by Global Digital Health Partnership (GDHP) member countries that commissioned this research, identify commonalities and gaps in approaches, identify examples of best practices, and understand the rationale for policies.
Methods: Data were collected through a scoping review of academic literature and a thematic analysis of policy documents published by selected GDHP member countries. These findings informed semistructured interviews with key senior policy makers from GDHP member countries, exploring their countries’ experience of AI-driven technologies in health care and the associated governance, as well as a focus group with professionals working in international health and technology, which discussed the themes and proposed policy recommendations. Policy recommendations were developed from the aggregated research findings.
Results: As this is an empirical research paper, we primarily focused on reporting the results of the interviews and the focus group. Semistructured interviews (n=10) and a focus group (n=6) revealed 4 core areas for international collaborations: leadership and oversight; a whole systems approach covering the entire AI pipeline from data collection to model deployment and use; standards and regulatory processes; and engagement with stakeholders and the public. There was a broad range of maturity in health AI activity among the participants, with varying data infrastructure, application of standards across the AI life cycle, and strategic approaches to both development and deployment. A demand was identified for greater consistency at the international level and for policies to support a robust innovation pipeline. In total, 13 policy recommendations were developed to support GDHP member countries in overcoming core AI governance barriers and establishing common ground for international collaboration.
Conclusions: AI-driven technology research and development for health care outpaces the creation of supporting AI governance globally. International collaboration and coordination on AI governance for health care is needed to ensure coherent solutions and allow countries to support and benefit from each other’s work. International bodies and initiatives have a leading role to play in the international conversation, including the production of tools and sharing of practical approaches to the use of AI-driven technologies for health care.
Scroll across for all published papers where I am one of several authors. Scroll further down for reports and external blogs.
Data have been widely hailed as ‘the raw material of the 21st century’, and ‘better use of data’ is a central feature of the NHS Long Term Plan. Yet data alone do not produce insights. To capitalise on opportunities to improve health and care, we need both the data and outstanding data analysis. However, policymakers and academia have focused almost exclusively on pure academic research around the aetiology of disease; the field of practical, coalface analytics has been largely neglected.
To address these concerns we set out to: (i) identify the technical, cultural and regulatory barriers to the better use of analysis; (ii) identify potential solutions to these barriers; (iii) frame these barriers and solutions as action statements in a standard format (‘specific person/organisation should do this specific thing so that this specific outcome can be achieved’); (iv) outline what successful change would look like in the format of ‘we’ll know we’ve won when’ statements.
Professor Ben Goldacre was commissioned by the government in February 2021 to review how to improve safety and security in the use of health data for research and analysis. The report, for which I was the lead researcher, makes 185 recommendations that would benefit patients and the healthcare sector.
The report is aimed at policy makers in the NHS and government, research funders and those who use the data for service planning, public health management and medical research. Patient representatives and the wider public may also have an interest in the report’s recommendations.
The report is informed by interviews, open sessions and deep dives with more than 100 stakeholders across academia and healthcare. The government’s response will be reflected in the forthcoming health and social care data strategy, a draft of which was published in June 2021.
How to get it right sets out the foundational policy work done to develop the plans for the NHS AI Lab. It outlines where in the system AI technologies can be used, and the policy work that is being done, and will need to be done, to ensure the use of AI is safe, effective and ethical.