Jessica Morley

Algorithmic-CDSS in the NHS: Designing for Success

This is an introduction to my PhD research, “Designing An Algorithmically-Enhanced National Health Service: Towards a Conceptual Model For the Successful Development, Deployment and Use of Algorithmic-Clinical Decision Support Software in the NHS”. At some point I’ll write this up into a paper.


A beloved institution in trouble

In 1946, as Britain began to recover from the impact of the Second World War, Clement Attlee’s new Labour Government passed the National Health Service (NHS) Act, setting out its intention to establish a publicly funded “comprehensive health service to secure the improvement in the physical and mental health of the people … and the prevention, diagnosis, and treatment of illness.” Two years later, on 5 July 1948, the new NHS came into being, designed to: meet the needs of everyone; be free at the point of delivery; and be based on clinical need rather than the ability to pay (Rivett, 1998). More than seven decades later, the NHS still strives to abide by these core principles (Young, 2017). The NHS Constitution, for instance, is still based on the same underlying values: everyone counts; compassion; respect and dignity; improving lives; working together for patients; and a commitment to quality care (Department of Health and Social Care, 2015). Admirable though this enduring commitment to providing free, high-quality care to all, in support of equality and the right to life, might be, it is increasingly difficult to meet.


21st century NHS patients are vastly different from mid-20th century patients: they are significantly more likely to be over the age of sixty-five and to require long-term care for multiple chronic comorbidities (Young, 2017). This makes providing comprehensive care considerably more complex, with clinicians needing to work across multiple disciplines and choose between a range of treatment options to strike an appropriate balance between survival rate and quality of life. To a certain extent, the fact that this ‘problem’ exists is positive; it demonstrates that the NHS’s ‘war on disease’ has been largely successful. However, it also presents policymakers and healthcare commissioners with two significant challenges. First, as Heckman et al. (2020) note, complexity of this degree can exceed the cognitive capacity of clinicians, increasing the likelihood of error and of variation in the quality of care, and so potentially exposing patients to greater risk of harm. Second, it drastically increases the cost of care. Today, more than 70% of the costs of care are attributable to the management of chronic disease (Young, 2017). This puts extreme pressure on NHS budgets and leaves many NHS organisations struggling to stay ‘in the black.’ Indeed, at the end of the 2019/2020 financial year (before the impact of the COVID-19 pandemic was felt), the net deficit of the NHS was more than £910 million (The King’s Fund, 2021).

Thus, in many ways, the NHS today finds itself a victim of its own success: a large and ageing population, used to accessing world-class healthcare for free, threatens both its ability to stay true to its core values in the short term and its sustainability in the long term. To tackle these issues, policymakers have, over the years, made multiple attempts to:


a) rein in demand and bring costs back under control, for example, introducing charges for prescriptions, dentistry, and ophthalmology (Rivett, 1998), shifting care out of hospitals, and redesigning contracts with NHS providers; and

b) reduce complexity, for example, by standardising treatment plans and pathways in accordance with the principles of evidence-based medicine (the “conscious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients” (Sackett et al., 1996)).


None of these attempts has been particularly successful; most often they have simply shifted the burden of debt to different parts of the healthcare system, much like sweeping dust under the carpet, while demand and expectations have continued to increase and healthcare outcomes have begun to decline (Papanicolas et al., 2019). It has, therefore, been suggested that the NHS needs to fundamentally change its operating model if it wishes to continue to provide nationally accessible, effective, and high-quality care in a sustainable manner.


According to Balicer & Cohen-Stavi (2020), achieving such a shift in its modus operandi will require the NHS to: start proactively anticipating future healthcare demands; move to a model of preventive care; and provide targeted ‘personalised’ treatment. All of this relies on the NHS’s ability to make better use of data analytics to understand exactly how and why diseases arise and how they can most effectively be predicted, prevented, and treated (Castle-Clarke, 2018). In short, it is argued that to provide 21st century patients with 21st century care, the NHS will have to embrace the opportunities presented by ‘big data’ (Cahan et al., 2019) for what is commonly termed ‘P4’ medicine.


The promise of P4 Medicine

‘P4’ medicine refers to care that is predictive, preventative, personalised and participatory and relies on the analysis of big patient-specific data (data derived from genomics, phenomics, electronic health records, or information about a person’s environment) to make changes to the management of public health at the macro level and to the management of individual patients at the micro level. More specifically: at the macro scale P4 medicine aims to stratify the population according to environmental, genetic, or lifestyle factors that most commonly affect the health outcomes of different groups; and at the micro scale, P4 medicine aims to develop a deep understanding of an individual person’s health to enable more accurate risk prediction and more effective prevention (Green & Vogt, 2016). It is the grand hope of policymakers that adopting these data-driven strategies will help the NHS achieve the so-called ‘triple aim’: improved quality of care, improved experience of care, and reduced per capita cost (Berwick et al., 2008).


Although, in policy documents, descriptions of the analytic techniques underlying P4 medicine, especially those that fall under the umbrella heading of Artificial Intelligence (AI), are often couched in terms such as ‘emerging,’ ‘novel,’ or ‘innovative,’ most are anything but new. The potential efficiency- and quality-boosting power of better information management was recognised by NHS policymakers as early as 1960 (McClenahan, 2000); diagnosis and treatment planning became a major focus of AI research in the 1970s (Hollis et al., 2019); and hopes accelerated when 96% of NHS primary care records had been fully digitised by 1996 (McMillan et al., 2018). In the past, however, the dreams of policymakers, commissioners, medical professionals, and patients alike were dashed by the challenges presented by the dimensionality and complexity of health data for standard statistical analysis (Thesmar et al., 2019); the limitations of rule-based AI systems (Davenport & Kalakota, 2019); and other problems related to scalability, applicability, and acceptance in medical research and clinical practice (Abidi & Abidi, 2019). It is, therefore, only relatively recently that enthusiasm for data-driven P4 medicine has returned to the NHS.


In part, this revival of interest can be attributed to the influence of high-profile advocates such as Dr. Robert Wachter and Dr. Eric Topol, who were commissioned to advise the NHS on the better use of data-driven technologies in 2016 and 2019 respectively (Topol, 2019; Wachter, 2016). From a more technical perspective, however, the recent resurgence in enthusiasm can be attributed to: a) a broadening of the definition of health data beyond data generated in ‘clinic,’ and the consequent integration of heterogeneous data (Abidi & Abidi, 2019); and b) developments in machine learning techniques that can infer previously undetected patterns from patient data (Buch et al., 2018). Data scientists have convinced policymakers that, combined, these two developments will lead to unprecedented accuracy in personalised predictions related to diagnostics, prognostics, and treatment (Blease et al., 2019). Consequently, hope in the ability of P4 medicine to deliver on the triple aim for the NHS has been revived. Indeed, the extent to which policymakers are pinning their hopes on the power of data has been made abundantly clear in the last three years, with promises to use data and AI for preventive and proactive care featuring prominently in the NHS Long Term Plan (Alderwick & Dixon, 2019; NHS England, 2019); the launch of the Government’s AI Mission to ‘use data, Artificial Intelligence, and innovation to transform the prevention, early diagnosis and treatment of chronic diseases’ (Department for Business, Energy & Industrial Strategy, 2018); and the creation of the £250 million NHS AI Lab, intended to support the development of techniques that will reduce clinicians’ workloads and improve productivity (Steventon et al., 2019). Thus, it seems unlikely that this renewed enthusiasm will abate any time soon.


Clinical Decision Support: Hope and Hype

It is estimated that, as a whole, the NHS manages approximately 50 billion rows of patient data. Add to this the fact that the Government is known to take a technologically deterministic approach to innovation in medicine, viewing it as an unproblematic means of freeing patients from the constraints imposed by individual biology (Lock & Nguyen, 2018), and it is perhaps unsurprising that the NHS is currently pinning so much of its hope for a sustainable future on the alleged transformative power of P4 medicine. Indeed, when this strategy is narrowed to the wider adoption of clinical decision support software (CDSS) across the NHS, it can, to those that adopt an attitude of ‘technological somnambulism’ (unreflective acceptance) (Lock & Nguyen, 2018), seem relatively plausible, perhaps even preferable to the mooted alternatives of restricting access or cutting the workforce.


At its most basic, CDSS is designed to act as a knowledge base or a rules-based expert system that aids clinical decision-making by ensuring clinicians have the right information at the right time and can readily make comparisons to previously encountered cases when attempting to diagnose new patients (Reisman, 1996). This type of CDSS came into being in the 1970s, when MYCIN was developed at Stanford for diagnosing blood-borne bacterial infections (Davenport & Kalakota, 2019). It essentially relies on the idea that human expertise about medical care can be reduced to ‘if-then’ statements describing the relationship between ‘antecedent’ (symptom) and ‘consequent’ (diagnosis) variables (Clarke, 2019b). These statements are built into the CDSS as rules so that, when it is supplied with new data, it can apply the rules to that data and present the relevant knowledge to the clinician, who is then responsible for sorting and interpreting it (Eberhardt et al., 2012). Although this early kind of CDSS showed promise, it was rarely adopted by practising NHS clinicians and was more often used for teaching (Hollis et al., 2019). There are multiple reasons for this lack of adoption and use, including the fact that early CDSS rarely performed better than clinicians and so did little to reduce medical error; but one of the main reasons basic rules-based CDSS has largely failed to impact care is that such systems are difficult to keep up to date as medical knowledge changes, and most have been unable to handle the significant increase in data availability (Davenport & Kalakota, 2019).
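To make the ‘antecedent → consequent’ structure concrete, below is a minimal sketch of a rules-based, presentational CDSS in Python. The rules, symptom labels, and suggestions are entirely invented for illustration and carry no real clinical knowledge; a real system such as MYCIN encoded hundreds of expert-derived rules.

```python
# Minimal sketch of a rules-based (presentational) CDSS.
# All rule content is illustrative, not real clinical knowledge.

RULES = [
    # (antecedent symptoms, consequent suggestion)
    ({"fever", "productive_cough"}, "consider bacterial pneumonia"),
    ({"fever", "stiff_neck", "photophobia"}, "consider meningitis"),
    ({"polyuria", "polydipsia", "weight_loss"}, "consider diabetes mellitus"),
]

def suggest(findings):
    """Return every consequent whose antecedents are all present.

    The system only surfaces matching knowledge; sorting and
    interpreting the suggestions remains the clinician's job.
    """
    return [consequent for antecedents, consequent in RULES
            if antecedents <= findings]

print(suggest({"fever", "productive_cough", "fatigue"}))
# -> ['consider bacterial pneumonia']
```

Note that the system can only ever return what was explicitly written into its rule base, which is precisely why such systems stagnate as medical knowledge changes.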


Algorithmic approaches to CDSS (henceforth ACDSS), in particular those reliant on machine learning techniques such as neural networks, can hypothetically overcome these particular limitations. ACDSS approaches do not necessarily begin with the manual modelling of previously known relationships between variables (Clarke, 2019b). Instead, they are designed to extract these rules from large volumes of patient data (initially, the training dataset) and use the ‘learned’ information to make real-time inferences related to risk and potential outcomes (Burrell, 2016; Jiang et al., 2017). Put simply, whereas traditional CDSS was designed to be presentational, ACDSS is designed to be inferential (Eberhardt et al., 2012). This change in approach makes ACDSS significantly more flexible than traditional CDSS. It can be developed for almost any medical problem, can take in a wide variety of patient data types, including both structured and unstructured data (Sood & McNeil, 2017), and can adapt as more data become available and attitudes regarding best practice shift (Eberhardt et al., 2012). Crucially, from the perspective of P4 medicine, ACDSS can be used for predictive and personalisation purposes. Early research has demonstrated that it can, for example, outperform human clinicians and traditional CDSS in predicting conditions such as cancer, cardiovascular disease, and diabetes, and that it is capable of recommending optimal treatments, inferring the status of a person’s health even when key measurements are missing, and refining treatment plans when the context changes (Reddy et al., 2019). Furthermore, horizon scanning suggests future ACDSS may be capable of autonomously triaging patients or prioritising access to care based on results from screening (Challen et al., 2019). It is for these reasons that most of the NHS hopes described above rest on the development, deployment, and use of ACDSS.
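By contrast, the following is a minimal sketch of the inferential approach, assuming a generic scikit-learn-style workflow. The data are synthetic and the ‘risk’ being predicted is invented; the point is only that the decision rules are estimated from labelled examples rather than hand-written.

```python
# Minimal sketch of an inferential (machine-learned) ACDSS.
# Synthetic data; the three 'features' stand in for any patient variables.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))                    # 1,000 synthetic 'patients'
y = (X[:, 0] + 0.5 * X[:, 1]                      # hidden 'true' relationship
     + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The 'rules' are estimated from labelled examples, not hand-written.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

print("held-out accuracy:", model.score(X_test, y_test))
print("risk for one new patient:", model.predict_proba(X_test[:1]))
```

Here the output is a patient-specific probability rather than a list of matched rules, which is what makes the approach suited to prediction and personalisation, and what removes it from the confines of explicitly codified medical knowledge.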


The challenge for policymakers is that, as Dr. Wachter has remarked, these hopes currently rest on ‘more promise than reality and more hype than evidence’ (Banerjee et al., 2018). The long-awaited ‘big data revolution’ keeps failing to materialise (Dhindsa et al., 2018) because, whilst the potential of ACDSS has been widely reported and features heavily in the research literature, there are almost no studies demonstrating a positive impact of ACDSS on frontline care or on real-world costs (Kelly & Young, 2017). Panch et al. (2019) refer to this as the ‘inconvenient truth,’ whilst Seneviratne et al. (2020) call this implementation gap ‘the elephant in the room,’ noting that the current lifecycle of an ACDSS algorithm is: train on historical data; publish a paper demonstrating highly accurate results; and then collect dust in the ‘model graveyard.’ As such, faith in the ability of ACDSS to bring about the much-needed shift in the NHS’s operational model is waning and scepticism is growing. The next section considers the reasons for this growing scepticism in more detail.


ACDSS: doomed to fail?

As Greenhalgh & Papoutsi (2019) explain, there are common-sense reasons why implementing complex innovations, such as ACDSS, in a healthcare system as complex as the NHS is hard: it takes hard work; it involves spending money and diverting staff from other, potentially more important, clinical tasks; it requires the shifting of cultural and professional norms; and it means taking risks that might lack support. Furthermore, the NHS has an incredibly poor track record of implementing information technology, and consistently fails to assess the benefits, feasibility, and challenges of implementing new systems (Castle-Clarke, 2018). This can be demonstrated by considering the history of the National Programme for IT (NPfIT), Care.Data, and the deployment of DeepMind’s ‘Streams’ app at the NHS Royal Free Hospital.


National Programme for IT

The National Programme for IT was launched by Tony Blair’s Labour Government in 2002. Described as the ‘world’s largest civil IT programme,’ it had an initial budget of £6.2 billion (Justinia, 2017). Initially intended to run for two years and nine months from April 2003, it was meant to bring the NHS’s information infrastructure into the 21st century by joining up information systems and datasets for use in direct care, so that patient outcomes could be improved and more sustainable business models developed (Sood & McNeil, 2017). Policymakers hoped that by 2006 the NHS would be largely paperless, with almost all administrative tasks, from booking appointments to transferring prescriptions from GPs to local pharmacies, completed electronically, and that patients would only have to ‘tell their story once’ because all previously recorded information would be available to any clinician, anywhere, at the touch of a button (Greenhalgh & Keen, 2013). These hopes were never realised, however. From the start the programme was dogged by delays, resistance from frontline clinicians, and spiralling costs. In short, doctors did not trust the programme, given that it appeared to have no impact on patient safety, and private contractors (notably BT) failed to deliver either on time or within budget (Justinia, 2017). By May 2011, more than five years after the programme was supposed to have concluded, the National Audit Office (NAO) suggested that completing the still-promised upgrades to the NHS’s information infrastructure would cost the taxpayer more than £11.4 billion, leading the head of the NAO to call it ‘yet another example of a department fundamentally underestimating the scale and complexity of a major IT-enabled change programme’ and to recommend that it be shut down (Great Britain & National Audit Office, 2011). This prompted the Cabinet Office’s Major Projects Authority to conduct an official review, which concluded in September 2011 that the NPfIT was not ‘fit to provide the modern IT services that the NHS needed’; the programme was duly dismantled (DHSC, 2011).


Care.Data

‘Care.Data’ was a project launched by the Health and Social Care Information Centre (HSCIC, now known as NHS Digital) in 2013. The intention was to extract data from all GP records and hospitals, link it, de-identify it by removing names and addresses, and store it centrally inside a ‘safe haven’ so that it could be made available to researchers and to the commissioners responsible for planning health services (NHS England, 2013). The HSCIC had a legal basis for doing this, set out in the Health and Social Care Act 2012, and it already extracted GP records via the General Practice Extraction Service (GPES) for various purposes, including monitoring NHS activity (Laurie et al., 2015). However, this legal basis proved insufficient for the HSCIC to gain the trust and confidence of patients, citizens, and healthcare professionals (van Staa et al., 2016). The ‘information and engagement’ strategy of delivering leaflets to all households in England failed to adequately inform the public: hundreds of people claimed not to have seen the leaflet, and it did not set out how data would be protected, or who would have access to it and under what conditions (Hays & Daker-White, 2015). Furthermore, privacy campaigners accurately pointed out that policymakers were overstating the ability of the HSCIC to make the data ‘anonymous’ through de-identification, and alerted the public to the fact that their data would remain vulnerable to re-identification, especially as technology developed (Presser et al., 2015). In short, the programme failed to gain the necessary ‘social licence’ (Carter et al., 2015) and was shut down in July 2016 (Limb, 2016).


‘Streams:’ DeepMind and the NHS Royal Free Hospital

Streams was an app designed by Google’s London-based subsidiary DeepMind, in partnership with the NHS Royal Free Hospital, to present clinicians with the key information needed to recognise the early signs of Acute Kidney Injury and intervene; that is, it was designed as a presentational form of CDSS. To enable DeepMind to develop the app, the Royal Free transferred 1.6 million patient records to the Google affiliate. Because the hospital believed that patient consent was not needed for an app used for ‘direct care purposes’ (Rumbold & Pierscionek, 2017), it failed to publicly disclose any of the details of this arrangement (Powles & Hodson, 2017). This was viewed as a significant infringement of patient rights, and thus damaging to patient autonomy, and the details of the partnership were investigated by the Information Commissioner’s Office (ICO). Following its investigation, the ICO ruled that the Royal Free had failed to comply with data protection law, specifically the common law duty of confidence, as patients would not have expected their data to be used by the hospital in that manner (ICO, 2017). Although this ruling did not result in the project being shut down (though the Royal Free was required to establish a legal basis under the Data Protection Act), it did create a significant ‘trust deficit’ and is viewed as having significantly set back the development of CDSS, especially ACDSS, in the NHS (Shah, 2017).


Combined, these three examples of past NHS data projects demonstrate that policymakers frequently fail to understand the ethical, legal, technical, and social complexities involved in such projects, significantly undermining their chances of success. It is not, therefore, implausible to believe that the promised benefits of ACDSS may also be based on flawed assumptions (Grote & Berens, 2020). If this is indeed the case, then the growing scepticism regarding the likely success of the NHS’s use of ACDSS is justified, especially when it is considered that ACDSS introduces a number of ‘special challenges,’ related to epistemic and normative trade-offs, that policymakers will not previously have encountered (Martin et al., 2019; Xiao et al., 2018). These trade-offs are discussed in the next section.


Additional complexity

To realise the full potential of ACDSS for the NHS, policymakers will have to make a series of strategic decisions about competing interests and values and how these should be balanced. This will necessarily involve trade-offs, at both the epistemic and the normative level, related to a broad range of issues from privacy to discrimination (Cohen et al., 2020). The concern is that if policymakers cannot identify how the thresholds for these trade-offs should be determined (Riso et al., 2017), then the deployment and use of ACDSS in the NHS is not only likely to fail, but likely to do so in an ethically harmful way (Grote & Berens, 2020). Of particular relevance in this regard are the trade-offs between individual privacy and the public good, and between accuracy and accountability.


Individual privacy vs. the public good

Identifiable health data is arguably the most sensitive personal data that exists: its unintentional release can undermine personal dignity, cause embarrassment, result in financial harm, and more (Heitmueller et al., 2014). The governance framework dictating how NHS patient data must be managed (stored, accessed, and processed), and what it can be used for, is thus designed to prioritise the protection of individual patient privacy. The ‘Caldicott Principles’, which provide the normative justification for most NHS data protection rules, are, for example, balanced seven-to-one in favour of limiting access to and use of NHS patient data (The National Data Guardian, 2020). Individual privacy is not, however, an absolute right. If providing wider access to NHS patient data for research purposes, including for the purpose of developing ACDSS, can result in better care for all, then it is arguably irresponsible for the NHS not to allow information from one patient to be used to help another (Goodman, 2020), particularly when the previously outlined values of the NHS Constitution are considered. Thus, it is argued, health data generated in a public health system should be treated as a public resource (Ballantyne & Schaefer, 2018), with the societal benefit of enabling research using NHS patient data given equal ethical weight to individual privacy (Stockdale et al., 2019). To a certain extent this concept is enshrined in data protection law. The UK Data Protection Act, for example, allows individual privacy rights to be derogated where applying them would seriously impair the achievement of scientific research, and, combined, the NHS Act 2006 and the Health Service (Control of Patient Information) Regulations 2002 make clear that the common law duty of confidentiality (which mandates consent) can be overridden to support public benefit (Mészáros & Ho, 2018). However, the fall-out from Care.Data, described above, demonstrates that having a legal basis does not always guarantee public support, particularly when the link between the use of patient records and the specific benefits delivered is not always visible (Stevenson, 2015).


Accuracy vs. Accountability

As discussed above, traditional CDSS presented information to clinicians, who were then responsible for interpreting it, be it a diagnosis or prognosis, and acting upon it. Although potentially useful as an aide-memoire, this type of CDSS was found to have limited utility because its performance (in terms of diagnostic accuracy, for example) was rarely greater than that of clinicians, and it was difficult to keep up to date. ACDSS, in contrast, aims to infer from patient data and provide a suggested diagnosis, prognosis, or treatment plan directly to the clinician, and is relatively flexible in terms of both applicability and adaptability. ACDSS, particularly that built using machine learning, thus has considerable benefits in terms of plasticity and accuracy. However, these added benefits come at a cost to transparency, interpretability, or, as it is most commonly termed, ‘explainability.’


Although not all ACDSS is based on algorithms that are ‘black-box’ in nature, black-box algorithms offer NHS clinicians the greatest opportunity in terms of performance improvement, as they are the most likely to provide insights that were previously unknown. In this way, ACDSS running on black-box algorithms privileges immediate patient benefit over the ‘understanding’ of the clinician (Price, 2018). This is potentially problematic on a number of fronts: it challenges clinicians’ ability to offer patients an account sufficiently detailed for them to give informed consent to treatment (see for example: Blasimme & Vayena, 2019; Findley et al., 2020); it potentially undermines the clinician’s ability to act as a ‘learned intermediary’ which, in turn, can undermine the extent to which they fulfil their fiduciary duty to the patient (see for example: Mittelstadt & Floridi, 2016; Price & Cohen, 2019; Sharpe, 1997); it can induce ‘automation bias’ and make the clinician less likely to question a result that looks unusual (Cabitza et al., 2017; Magrabi et al., 2019); it makes it difficult to verify the information upon which the algorithm was built, which is concerning given that most ACDSS algorithms are built by those outside the medical profession (Waring et al., 2020); and, finally, the complexity of the model (and of the steps involved in developing it) can make it difficult to identify sources of error, challenging existing interpretations of medical liability (see for example: He et al., 2019; Reed, 2018; the Precise4Q consortium et al., 2020; Vayena et al., 2018). Identifying the optimum balance between these competing factors (the need for accuracy and performance, and the need for explainability) will be essential for ensuring trust in the use of ACDSS in the NHS (Vollmer et al., 2020), yet it is likely to be extremely challenging for policymakers given the range of opinions expressed by different stakeholders.
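A toy illustration of this trade-off, on synthetic data with no clinical content: a shallow decision tree can print the exact rules it applies, whereas a boosted ensemble of hundreds of trees (standing in here for any black-box model) typically scores higher but offers no comparably direct account of an individual prediction.

```python
# Toy accuracy-vs-explainability comparison on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X_tr, y_tr)
boost = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

print("shallow tree accuracy:", tree.score(X_te, y_te))
print(export_text(tree))  # the model *is* its own explanation
print("boosted ensemble accuracy:", boost.score(X_te, y_te))
# The ensemble typically scores higher, but there is no analogue of
# export_text for its hundreds of constituent trees: any explanation
# must be reconstructed post hoc.
```

The gap between the two numbers is the 'performance' side of the trade-off; the gap between a printable rule and a post-hoc reconstruction is the 'explainability' side.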


Though significant, these are just two examples of the additional complexities facing policymakers looking to encourage the development, deployment, and use of ACDSS in the NHS; there are almost certainly many others, given the complexity of the medical domain (Miotto et al., 2018). When knowledge of these additional complexities is added to knowledge of the NHS’s history of failed data transformation projects, it is easy to see why the breakthroughs promised by ACDSS remain largely unrealised (Norgeot et al., 2019). Indeed, the failure to deliver feels almost inevitable. This raises the question: is the NHS’s almost relentless pursuit of an ACDSS-enabled shift to P4 medicine based purely on mythology (boyd & Crawford, 2012), innovation bias (Greenhalgh, 2013), and a belief in the power of magic bullets (Janes & Corbett, 2009)?


It is not outside the realm of possibility that this type of ‘magical thinking’ lies behind the NHS’s seemingly sudden interest in the power of ACDSS. As Dixon-Woods et al. (2011) point out, it has driven the NHS’s uptake of multiple ethically dubious interventions in the past. Policymakers, it seems, have historically been reluctant to pass up any chance, no matter how slim, to achieve the triple aim. There is an argument that makes this ‘give it a go’ attitude understandable, perhaps even justifiable: if any health information technology has even the smallest potential to improve health, then the NHS has an ethical duty to invest in its use for this purpose (Goodman, 2020). However, ACDSS is more than just a new type of information technology; it is more akin to an agentic sociotechnical system with the potential to fundamentally re-ontologise the nature of healthcare (Floridi, 2017a). It is, therefore, unwise for policymakers to blithely endorse its adoption without first considering the implications (discussed in the next section) of this re-ontologising power (Adkins, 2017).


The Risks of Changing the Intrinsic Nature of Healthcare

Traditional CDSS is deductive. It is given specific inputs and (most often) provides binary outputs, for example, flagging the result of a diagnostic test as normal or abnormal. Clinicians act as the intermediary in this process, contextualising both the results and the potential treatment plan in a way that ensures clinical relevance and predictions are considered together (Cahan et al., 2019). ACDSS, however, uses inductive reasoning and, as such, releases diagnostic or prognostic predictions from the confines of existing medical knowledge (Cahan et al., 2019). This reduces the role of the clinician in the clinical encounter, potentially cutting them out of the loop completely, and so de-couples the patient from the clinician. At the same time, the push towards using ACDSS for predictive and preventive purposes, as part of a wider shift towards P4 medicine, effectively reframes the NHS as a national screening service of unprecedented scale and scope (Green & Vogt, 2016). This can result in over-medicalisation, where everyone cared for by the NHS is re-classified as a potentially ‘sick patient’ in sub-optimal health, placing a responsibility on them to take relevant action in order to improve their predicted outcomes (Green & Vogt, 2016). Combined, these de-couplings (of the patient from the clinician, and of wellness from the mere absence of disease) have the potential to change how the ‘clinical domain’ is characterised (Chin-Yee & Upshur, 2019), and thus how the boundaries of the NHS are delineated, in a way that will inevitably change: what counts as valid knowledge about the body; the nature of the therapeutic relationship; and the relationship between the body, culture, and society (Lock & Nguyen, 2018).


Knowledge about the body

Relying on ACDSS to make all decisions regarding diagnosis, prognosis, and treatment could narrow the scope of the ‘clinical gaze’ so that it only looks upon, and thus only considers relevant, those facets of a person’s health that can be recorded as ‘facts’ in an electronic health record (Holmes et al., 2006). This assumes that the presence or absence of specific SNOMED codes (the standardised clinical terminology used by the NHS to record symptoms) can, for example, confirm or dismiss a specific diagnosis. In other words, relying on ACDSS assumes that only knowledge considered ‘valid’ from a positivistic perspective is relevant for diagnostic, prognostic, or treatment purposes (Goldenberg, 2006); knowledge considered valid from a humanistic perspective, such as a person’s lived experience (Deeny & Steventon, 2015) or their subjective feeling of wellbeing (Molnár-Gábor, 2020), is deemed irrelevant and ignorable.
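A deliberately crude sketch of this reduction: the ‘patient’ below is, from the algorithm’s point of view, nothing more than a set of clinical codes, and the free-text narrative never enters the feature space. The fever and cough codes are widely used SNOMED CT concept IDs, but the record and the flagging rule are invented for illustration.

```python
# A patient as an algorithm 'sees' one: a set of clinical codes.
patient = {
    "codes": {"386661006", "49727002"},  # SNOMED CT: fever, cough
    "free_text": "says she has 'not felt herself' since being widowed",
}

# Invented flagging rule, defined over coded 'facts' only.
FLAG_CODES = {"386661006", "49727002"}

def flag_for_review(record):
    # Only the coded facts reach the rule; the free-text narrative,
    # however clinically salient, never enters the feature space.
    return bool(record["codes"] & FLAG_CODES)

print(flag_for_review(patient))  # True, driven entirely by the codes
```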


The Therapeutic Relationship

In turn, this narrowing of what counts as ‘relevant’ in the clinical encounter changes the nature of the therapeutic relationship (based on the Hippocratic triangle) between patient, disease, and clinician (Antoniou et al., 2010; Askitopoulou & Vgontzas, 2018) by altering the ethics of care (Dalton-Brown, 2020). The currently accepted ethics of care within the NHS emphasises the importance of viewing patients as individuals with values and needs that should be taken into account in clinical decision-making (Morrell, 2006). This is the philosophical underpinning of shared decision-making, in which patients are treated as equals in decisions regarding their treatment plan and their values are treated as being as important as empirical data (Barry & Edgman-Levitan, 2012). Approaching the therapeutic relationship from this perspective places patients at the centre and recognises that decisions about what ought to be done in any given clinical situation should not be based on empirical data alone (Tonelli, 1998). Shifting decision-making responsibility from clinician to ACDSS limits the opportunities for patients, and their values, to be accounted for (Char et al., 2018) and so has the potential to transform the meaning of essential components of a patient-centric therapeutic model, including trustworthiness, transparency, agency, and responsibility (Braun et al., 2020). If not carefully controlled, such shifts could pose a threat to patient autonomy and dignity and herald a return to a paternalistic model of medical decision-making (Grote & Berens, 2020).


Transformative effects: Body, Culture, Society

Combined, these changes to accepted knowledge about the body and to the philosophical underpinning of the therapeutic relationship have the potential to transform the concept of illness from a normative concept, informed by personal and socio-cultural context, into an ‘objectified technical problem’ that can only be solved by the application of technology and positivistic empiricism (Lock & Nguyen, 2018). This process decontextualises healthcare and makes opaque the moral assumptions underpinning policy decisions designed to push the model of care in one direction or another. For example, NHS policymakers are quick to frame the increased use of data and ACDSS in the delivery of care as ‘empowering’ for patients, suggesting that the more that is known about their individual health, their body, and their mind, the more that can be done to help them help themselves achieve optimum health. Whilst these intentions might appear noble on the surface, they have also been revealed to be underpinned by an info-liberal philosophy that frames individual bodies as ‘enterprises’ to be controlled and constantly improved: individuals are given the information they ‘need’ to understand their personalised ‘risk score’ (through the use of ACDSS) and are then expected to take on the responsibility for managing that risk by making rational choices (Catlaw & Sandberg, 2018). This assumes that health risks, and the actions that need to be taken to control them, are based on generalisable formulae that do not need to account for a person’s wider context (Molnár-Gábor, 2020). Ultimately, this ignores the influence of the ‘social determinants’ of health and increases the risk of individuals (and particularly of groups) being ‘blamed’ for their own ill-health if they are perceived not to have taken the appropriate actions to improve their health outcomes, even where their political, social, or cultural circumstances made taking these rationalised actions impossible (Morley & Floridi, 2019). This is especially problematic when it is considered that the baseline against which people will be measured is likely to be derived from highly biased datasets (white, adult men are strongly over-represented in existing data), which may mean that the standards the system holds everyone to are impossible for under-represented groups to meet, significantly increasing inequalities in care (Cheney-Lippold, 2017; Nordling, 2019).
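The mechanism behind this last point can be demonstrated with a few lines of entirely synthetic code: when one group dominates the training data and the groups’ baseline distributions differ, a single pooled model effectively holds the under-represented group to the majority’s standard.

```python
# Synthetic demonstration of dataset bias. Both groups obey the same
# *relative* rule, but their baseline feature distributions differ.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def sample_group(n, centre):
    X = rng.normal(loc=centre, size=(n, 2))
    y = (X[:, 0] > centre[0]).astype(int)  # same rule relative to baseline
    return X, y

X_a, y_a = sample_group(900, np.array([0.0, 0.0]))  # over-represented group
X_b, y_b = sample_group(100, np.array([2.0, 1.0]))  # under-represented group

# One pooled model is fitted for everyone, as a naive ACDSS might be.
model = LogisticRegression().fit(np.vstack([X_a, X_b]), np.hstack([y_a, y_b]))

print("group A accuracy:", model.score(X_a, y_a))  # high
print("group B accuracy:", model.score(X_b, y_b))  # markedly lower: the
# fitted boundary reflects the majority group's baseline, not group B's
```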


Unless carefully managed, this re-ontologising of healthcare could have an extremely negative impact on individual patients, on particular population groups, on trust in the institution of the NHS itself, and ultimately on the effectiveness of healthcare itself (Morley et al., 2020). This is concerning because it could mean that, as well as wasting public funds on a technology that may fail to be adopted, or fail to deliver on its promises, the policy push towards P4 medicine could also cause harm (James, 2014) and impede the NHS’s ability to operate in accordance with its key values: everyone counts; compassion; respect and dignity; improving lives; working together for patients; and a commitment to quality care (Department of Health and Social Care, 2015). With stakes this high, it is necessary to ask: should the entire attempt to create an algorithmically-enhanced NHS be abandoned? This is the topic of the next section.


A Poor Investment?

From this brief narrative summary it is possible to conclude that ACDSS presents the NHS with significant positive opportunities for change (Shaw et al., 2019). With access to longitudinal health records for all citizens in Britain, the NHS could play a leading role in pushing forward the development of ACDSS (and other AI systems for healthcare) (Thompson & Morgan, 2020). However, it seems as though confidence in the ability of the NHS to successfully capitalise on these opportunities is waning by the day (Maddox et al., 2019). This is not surprising: the institution has a poor track record of managing large-scale technical infrastructure and digital transformation projects, and ACDSS is both considerably more complex and, once the potential ramifications of its re-ontologising effects are considered, considerably riskier. It seems that building an ACDSS to evaluate, diagnose, and treat patients in the ‘lab’ is the easiest part of the development, deployment, and use pipeline for AI in healthcare (Beam & Kohane, 2018); translating technical success into clinical impact remains a significant challenge (The Lancet, 2017).


Panch et al. (2019) suggest that this leaves the NHS facing a difficult choice: either significantly downgrade its enthusiasm regarding the potential of AI, or focus on the fundamentals and create the infrastructure necessary to realise it. Traditionally, as the failed examples above illustrate, when faced with complex decisions such as these the NHS has opted for the first (arguably easier) choice: to abandon ship. This is potentially the least reputationally damaging option, and it would at least protect against further waste of public funds. However, it would also see patients and the NHS incur significant opportunity costs, and could pose a threat to the NHS’s commitment to improving lives. The benefits of the other option, focusing on creating the necessary fundamental infrastructure (technical, regulatory, cultural, ethical, and social), might, in this instance, outweigh the costs. Fortunately, from this perspective, it is a positive that adoption and use of ACDSS in the NHS has been so limited. This means that there is still the opportunity for policymakers to design the system within which ACDSS is developed, deployed, and used according to the pro-ethical requirements of different stakeholders across the NHS. If managed ‘correctly,’ this shifting of policymakers’ attention away from the hype generated by the media and back onto the fundamental infraethics of the NHS (Floridi, 2017b) could enable the NHS to capitalise on the dual advantage of ethical AI: maximising the opportunities whilst proactively mitigating the risks (Floridi et al., 2018). It is this possibility that motivates this thesis. Thus, the motivating question becomes:


MQ: How can the NHS be enabled to capitalise on the dual advantage of ethical ACDSS?


And the aim is:


Aim: To help improve the system within which the NHS develops, deploys, and uses ACDSS.


One clear way of responding to this motivation, in the hope of achieving this aim, is to adopt the logic of design as a conceptual logic of information (henceforth: the logic of design).

As explained by Floridi (2017), the logic of design is a poietic science. As such, it does not seek to understand the existing system (in this case the NHS’s existing core sociocultural, technical, regulatory, and ethical infrastructure) and to use this understanding to develop a model that can explain the past or the present. Instead, the logic of design seeks to develop the ideal model for a system (in this case the ideal core infrastructure) and to use policy to realise the system according to this model, or at least to get as close to it as possible, since it is a logic of necessary but not sufficient conditions. Exactly how this can be achieved in this context is discussed later, in the methodology section. For now, it suffices to say that it involves two elements: the first is identifying the requirements for the ‘ideal’ model from which policymakers should work; the second is a series of policy recommendations that can be used to help policymakers realise the system for the development, deployment, and use of AI that will best help them deliver the vision outlined in the sections above.


Thus, the overarching research question for my thesis is:


RQ: What are the design requirements for the successful development, deployment, and use of ACDSS in the NHS?


where ‘successful’ is defined as safe, effective, and ethical (Harmon & Kale, 2015).


References

  1. Abidi, S. S. R., & Abidi, S. R. (2019). Intelligent health data analytics: A convergence of artificial intelligence and big data. Healthcare Management Forum, 32(4), 178–182. https://doi.org/10.1177/0840470419846134

  2. Adkins, D. E. (2017). Machine Learning and Electronic Health Records: A Paradigm Shift. The American Journal of Psychiatry, 174(2), 93–94. https://doi.org/10.1176/appi.ajp.2016.16101169

  3. Alderwick, H., & Dixon, J. (2019). The NHS long term plan. BMJ, l84. https://doi.org/10.1136/bmj.l84

  4. Antoniou, S. A., Antoniou, G. A., Granderath, F. A., Mavroforou, A., Giannoukas, A. D., & Antoniou, A. I. (2010). Reflections of the Hippocratic Oath in modern medicine. World Journal of Surgery, 34(12), 3075–3079. Scopus. https://doi.org/10.1007/s00268-010-0604-3

  5. Askitopoulou, H., & Vgontzas, A. N. (2018). The relevance of the Hippocratic Oath to the ethical and moral values of contemporary medicine. Part II: interpretation of the Hippocratic Oath—Today’s perspective. European Spine Journal, 27(7), 1491–1500. Scopus. https://doi.org/10.1007/s00586-018-5615-z

  6. Balicer, R. D., & Cohen-Stavi, C. (2020). Advancing Healthcare Through Data-Driven Medicine and Artificial Intelligence. In B. Nordlinger, C. Villani, & D. Rus (Eds.), Healthcare and Artificial Intelligence (pp. 9–15). Springer International Publishing. https://doi.org/10.1007/978-3-030-32161-1_2

  7. Ballantyne, A., & Schaefer, G. O. (2018). Consent and the ethical duty to participate in health data research. Journal of Medical Ethics, 44(6), 392–396. https://doi.org/10.1136/medethics-2017-104550

  8. Banerjee, A., Drumright, L. N., & Mitchell, A. R. J. (2018). Can the NHS be a learning healthcare system in the age of digital technology? BMJ Evidence-Based Medicine, 23(5), 161–164. https://doi.org/10.1136/bmjebm-2018-110953

  9. Barry, M. J., & Edgman-Levitan, S. (2012). Shared decision making—The pinnacle of patient-centered care. New England Journal of Medicine, 366(9), 780–781.

  10. Beam, A. L., & Kohane, I. S. (2018). Big Data and Machine Learning in Health Care. JAMA, 319(13), 1317. https://doi.org/10.1001/jama.2017.18391

  11. Berwick, D. M., Nolan, T. W., & Whittington, J. (2008). The triple aim: Care, health, and cost. Health Affairs (Project Hope), 27(3), 759. https://doi.org/10.1377/hlthaff.27.3.759

  12. Blasimme, A., & Vayena, E. (2019). The Ethics of AI in Biomedical Research, Patient Care and Public Health. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3368756

  13. Blease, C., Kaptchuk, T. J., Bernstein, M. H., Mandl, K. D., Halamka, J. D., & DesRoches, C. M. (2019). Artificial Intelligence and the Future of Primary Care: Exploratory Qualitative Study of UK General Practitioners’ Views. Journal of Medical Internet Research, 21(3), e12802. https://doi.org/10.2196/12802

  14. boyd, danah, & Crawford, K. (2012). CRITICAL QUESTIONS FOR BIG DATA: Provocations for a cultural, technological, and scholarly phenomenon. Information, Communication & Society, 15(5), 662–679. https://doi.org/10.1080/1369118X.2012.678878

  15. Braun, M., Hummel, P., Beck, S., & Dabrock, P. (2020). Primer on an ethics of AI-based decision support systems in the clinic. Journal of Medical Ethics, medethics-2019-105860. https://doi.org/10.1136/medethics-2019-105860

  16. Buch, V. H., Ahmed, I., & Maruthappu, M. (2018). Artificial intelligence in medicine: Current trends and future possibilities. British Journal of General Practice, 68(668), 143–144. https://doi.org/10.3399/bjgp18X695213

  17. Burrell, J. (2016). How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data & Society, 3(1), 205395171562251. https://doi.org/10.1177/2053951715622512

  18. Cabitza, F., Rasoini, R., & Gensini, G. F. (2017). Unintended Consequences of Machine Learning in Medicine. JAMA, 318(6), 517. https://doi.org/10.1001/jama.2017.7797

  19. Cahan, E. M., Hernandez-Boussard, T., Thadaney-Israni, S., & Rubin, D. L. (2019). Putting the data before the algorithm in big data addressing personalized healthcare. Npj Digital Medicine, 2(1), 78. https://doi.org/10.1038/s41746-019-0157-2

  20. Carter, P., Laurie, G. T., & Dixon-Woods, M. (2015). The social licence for research: Why care.data ran into trouble. Journal of Medical Ethics, 41(5), 404. https://doi.org/10.1136/medethics-2014-102374

  21. Catlaw, T. J., & Sandberg, B. (2018). The Quantified Self and the Evolution of Neoliberal Self-Government: An Exploratory Qualitative Study. Administrative Theory & Praxis, 40(1), 3–22. https://doi.org/10.1080/10841806.2017.1420743

  22. Challen, R., Denny, J., Pitt, M., Gompels, L., Edwards, T., & Tsaneva-Atanasova, K. (2019). Artificial intelligence, bias and clinical safety. BMJ Quality and Safety, 28(3), 231–237. Scopus. https://doi.org/10.1136/bmjqs-2018-008370

  23. Char, D. S., Shah, N. H., & Magnus, D. (2018). Implementing Machine Learning in Health Care—Addressing Ethical Challenges. The New England Journal of Medicine, 378(11), 981–983. https://doi.org/10.1056/NEJMp1714229

  24. Cheney-Lippold, J. (2017). We are data: Algorithms and the making of our digital selves. New York University Press.

  25. Chin-Yee, B., & Upshur, R. (2019). Three Problems with Big Data and Artificial Intelligence in Medicine. Perspectives in Biology and Medicine, 62(2), 237–256. https://doi.org/10.1353/pbm.2019.0012

  26. Clarke, R. (2019). Why the world wants controls over Artificial Intelligence. Computer Law and Security Review, 35(4), 423–433. Scopus. https://doi.org/10.1016/j.clsr.2019.04.006

  27. Cohen, I. G., Evgeniou, T., Gerke, S., & Minssen, T. (2020). The European artificial intelligence strategy: Implications and challenges for digital health. The Lancet Digital Health, 2(7), e376–e379. Scopus. https://doi.org/10.1016/S2589-7500(20)30112-6

  28. Dalton-Brown, S. (2020). The Ethics of Medical AI and the Physician-Patient Relationship. Cambridge Quarterly of Healthcare Ethics, 29(1), 115–121. https://doi.org/10.1017/S0963180119000847

  29. Davenport, T., & Kalakota, R. (2019). The potential for artificial intelligence in healthcare. Future Healthcare Journal, 6(2), 94–98. https://doi.org/10.7861/futurehosp.6-2-94

  30. Deeny, S. R., & Steventon, A. (2015). Making sense of the shadows: Priorities for creating a learning healthcare system based on routinely collected data. BMJ Qual Saf, 24(8), 505–515.

  31. Department for Business, Energy & Industrial Strategy. (2018). Artificial Intelligence and Data Grand Challenge. Mission: Use data, Artificial Intelligence and innovation to transform the prevention, early diagnosis and treatment of chronic diseases by 2030 [Gov.uk]. https://www.gov.uk/government/publications/industrial-strategy-the-grand-challenges/missions#artificial-intelligence-and-data

  32. Department of Health and Social Care. (2015). The NHS Constitution for England. https://www.gov.uk/government/publications/the-nhs-constitution-for-england/the-nhs-constitution-for-england

  33. Dhindsa, K., Bhandari, M., & Sonnadara, R. R. (2018). What’s holding up the big data revolution in healthcare? BMJ, k5357. https://doi.org/10.1136/bmj.k5357

  34. DHSC. (2011, September 11). Dismantling the NHS National Programme for IT. https://www.gov.uk/government/news/dismantling-the-nhs-national-programme-for-it

  35. Dixon-Woods, M., Amalberti, R., Goodman, S., Bergman, B., & Glasziou, P. (2011). Problems and promises of innovation: Why healthcare needs to rethink its love/hate relationship with the new. BMJ Quality & Safety, 20(Suppl 1), i47–i51. https://doi.org/10.1136/bmjqs.2010.046227

  36. Eberhardt, J., Bilchik, A., & Stojadinovic, A. (2012). Clinical decision support systems: Potential with pitfalls. Journal of Surgical Oncology, 105(5), 502–510. Scopus. https://doi.org/10.1002/jso.23053

  37. Findley, J., Woods, A., Robertson, C., & Slepian, M. (2020). Keeping the Patient at the Center of Machine Learning in Healthcare. The American Journal of Bioethics, 20(11), 54–56. https://doi.org/10.1080/15265161.2020.1820100

  38. Floridi, L. (2017a). Digital’s Cleaving Power and Its Consequences. Philosophy & Technology, 30(2), 123–129. https://doi.org/10.1007/s13347-017-0259-1

  39. Floridi, L. (2017b). Infraethics–on the Conditions of Possibility of Morality. Philosophy & Technology, 30(4), 391–394. https://doi.org/10.1007/s13347-017-0291-1

  40. Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., Luetge, C., Madelin, R., Pagallo, U., Rossi, F., Schafer, B., Valcke, P., & Vayena, E. (2018). AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. Minds and Machines, 28(4), 689–707. https://doi.org/10.1007/s11023-018-9482-5

  41. Goldenberg, M. J. (2006). On evidence and evidence-based medicine: Lessons from the philosophy of science. Social Science & Medicine, 62(11), 2621–2632.

  42. Goodman, K. W. (2020). Ethics in Health Informatics. Yearbook of Medical Informatics, 29(01), 026–031. https://doi.org/10.1055/s-0040-1701966

  43. Great Britain & National Audit Office. (2011). Department of Health: The national programme for IT in the NHS: an update on the delivery of detailed care records systems. TSO.

  44. Green, S., & Vogt, H. (2016). Personalizing Medicine: Disease prevention in silico and in socio. Humana Mente, 30, 105–145.

  45. Greenhalgh, T. (2013). Five biases of new technologies. British Journal of General Practice, 63(613), 425–425. https://doi.org/10.3399/bjgp13X670741

  46. Greenhalgh, T., & Keen, J. (2013). England’s national programme for IT. BMJ, 346(jun28 2), f4130–f4130. https://doi.org/10.1136/bmj.f4130

  47. Greenhalgh, T., & Papoutsi, C. (2019). Spreading and scaling up innovation and improvement. BMJ, l2068. https://doi.org/10.1136/bmj.l2068

  48. Grote, T., & Berens, P. (2020). On the ethics of algorithmic decision-making in healthcare. Journal of Medical Ethics, 46(3), 205–211. https://doi.org/10.1136/medethics-2019-105586

  49. Harmon, S. H. E., & Kale, D. (2015). Regulating in developing countries: Multiple roles for medical research and products regulation in Argentina and India. Technology in Society, 43, 10–22. https://doi.org/10.1016/j.techsoc.2015.07.002

  50. Hays, R., & Daker-White, G. (2015). The care.data consensus? A qualitative analysis of opinions expressed on Twitter Health policies, systems and management in high-income countries. BMC Public Health, 15(1). https://doi.org/10.1186/s12889-015-2180-9

  51. He, J., Baxter, S. L., Xu, J., Xu, J., Zhou, X., & Zhang, K. (2019). The practical implementation of artificial intelligence technologies in medicine. Nature Medicine, 25(1), 30–36. https://doi.org/10.1038/s41591-018-0307-0

  52. Heckman, G. A., Hirdes, J. P., & McKelvie, R. S. (2020). The Role of Physicians in the Era of Big Data. Canadian Journal of Cardiology, 36(1), 19–21. https://doi.org/10.1016/j.cjca.2019.09.018

  53. Heitmueller, A., Henderson, S., Warburton, W., Elmagarmid, A., Pentland, A., & Darzi, A. (2014). Developing public policy to advance the use of big data in health care. Health Affairs, 33(9), 1523–1530. https://doi.org/10.1377/hlthaff.2014.0771

  54. Hollis, K. F., Soualmia, L. F., & Séroussi, B. (2019). Artificial Intelligence in Health Informatics: Hype or Reality? Yearbook of Medical Informatics, 28(01), 003–004. https://doi.org/10.1055/s-0039-1677951

  55. Holmes, D., Murray, S. J., Perron, A., & Rail, G. (2006). Deconstructing the evidence-based discourse in health sciences: Truth, power and fascism. International Journal of Evidence-Based Healthcare, 4(3), 180–186.

  56. ICO. (2017, July 3). Royal Free—Google DeepMind trial failed to comply with data protection law. https://ico.org.uk/about-the-ico/news-and-events/news-and-blogs/2017/07/royal-free-google-deepmind-trial-failed-to-comply-with-data-protection-law/

  57. James, J. E. (2014). Personalised medicine, disease prevention, and the inverse care law: More harm than benefit? European Journal of Epidemiology, 29(6), 383–390. https://doi.org/10.1007/s10654-014-9898-z

  58. Janes, C. R., & Corbett, K. K. (2009). Anthropology and Global Health. Annual Review of Anthropology, 38(1), 167–183. https://doi.org/10.1146/annurev-anthro-091908-164314

  59. Jiang, F., Jiang, Y., Zhi, H., Dong, Y., Li, H., Ma, S., Wang, Y., Dong, Q., Shen, H., & Wang, Y. (2017). Artificial intelligence in healthcare: Past, present and future. Stroke and Vascular Neurology, 2(4), 230–243. https://doi.org/10.1136/svn-2017-000101

  60. Justinia, T. (2017). The UK’s National Programme for IT: Why was it dismantled? Health Services Management Research, 30(1), 2–9. https://doi.org/10.1177/0951484816662492

  61. Kelly, C. J., & Young, A. J. (2017). Promoting innovation in healthcare. Future Hospital Journal, 4(2), 121–125. https://doi.org/10.7861/futurehosp.4-2-121

  62. Laurie, G., Ainsworth, J., Cunningham, J., Dobbs, C., Jones, K. H., Kalra, D., Lea, N. C., & Sethi, N. (2015). On moving targets and magic bullets: Can the UK lead the way with responsible data linkage for health research? International Journal of Medical Informatics, 84(11), 933–940. https://doi.org/10.1016/j.ijmedinf.2015.08.011

  63. Limb, M. (2016). Controversial database of medical records is scrapped over security concerns. BMJ, i3804. https://doi.org/10.1136/bmj.i3804

  64. Lock, M. M., & Nguyen, V.-K. (2018). An anthropology of biomedicine (Second Edition). Wiley Blackwell.

  65. Maddox, T. M., Rumsfeld, J. S., & Payne, P. R. O. (2019). Questions for Artificial Intelligence in Health Care. JAMA, 321(1), 31. https://doi.org/10.1001/jama.2018.18932

  66. Magrabi, F., Ammenwerth, E., McNair, J. B., De Keizer, N. F., Hyppönen, H., Nykänen, P., Rigby, M., Scott, P. J., Vehko, T., Wong, Z. S.-Y., & Georgiou, A. (2019). Artificial Intelligence in Clinical Decision Support: Challenges for Evaluating AI and Practical Implications. Yearbook of Medical Informatics, 28(1), 128–134. Scopus. https://doi.org/10.1055/s-0039-1677903

  67. Martin, G., Arora, S., Shah, N., King, D., & Darzi, A. (2019). A regulatory perspective on the influence of health information technology on organisational quality and safety in England. Health Informatics Journal. https://doi.org/10.1177/1460458219854602

  68. McClenahan, J. (2000). The value of information management and technology to health care professionals. Journal of Clinical Excellence, 2(2), 93–98.

  69. McMillan, B., Eastham, R., Brown, B., Fitton, R., & Dickinson, D. (2018). Primary Care Patient Records in the United Kingdom: Past, Present, and Future Research Priorities. Journal of Medical Internet Research, 20(12), e11293. https://doi.org/10.2196/11293

  70. Mészáros, J., & Ho, C. (2018). Big Data and Scientific Research: The Secondary Use of Personal Data under the Research Exemption in the GDPR. Hungarian Journal of Legal Studies, 59(4), 403–419. https://doi.org/10.1556/2052.2018.59.4.5

  71. Miotto, R., Wang, F., Wang, S., Jiang, X., & Dudley, J. T. (2018). Deep learning for healthcare: Review, opportunities and challenges. Briefings in Bioinformatics, 19(6), 1236–1246. https://doi.org/10.1093/bib/bbx044

  72. Mittelstadt, B., & Floridi, L. (2016). The Ethics of Big Data: Current and Foreseeable Issues in Biomedical Contexts. Science and Engineering Ethics, 22(2), 303–341. https://doi.org/10.1007/s11948-015-9652-2

  73. Molnár-Gábor, F. (2020). Artificial Intelligence in Healthcare: Doctors, Patients and Liabilities. In T. Wischmeyer & T. Rademacher (Eds.), Regulating Artificial Intelligence (pp. 337–360). Springer International Publishing. https://doi.org/10.1007/978-3-030-32361-5_15

  74. Morley, J., & Floridi, L. (2019). The Limits of Empowerment: How to Reframe the Role of mHealth Tools in the Healthcare Ecosystem. Science and Engineering Ethics. https://doi.org/10.1007/s11948-019-00115-1

  75. Morley, J., Machado, C. C. V., Burr, C., Cowls, J., Joshi, I., Taddeo, M., & Floridi, L. (2020). The ethics of AI in health care: A mapping review. Social Science & Medicine, 260, 113172. https://doi.org/10.1016/j.socscimed.2020.113172

  76. Morrell, K. (2006). Governance, Ethics and the National Health Service. Public Money & Management, 26(1), 55–62. https://doi.org/10.1111/j.1467-9302.2005.00501.x

  77. NHS England. (2013). Care.data. https://www.england.nhs.uk/2013/10/care-data/

  78. NHS England. (2019). The NHS Long Term Plan. https://www.longtermplan.nhs.uk/wp-content/uploads/2019/01/nhs-long-term-plan.pdf

  79. Nordling, L. (2019). A fairer way forward for AI in health care. Nature, 573(7775), S103–S105. https://doi.org/10.1038/d41586-019-02872-2

  80. Norgeot, B., Glicksberg, B. S., & Butte, A. J. (2019). A call for deep-learning healthcare. Nature Medicine, 25(1), 14–15. https://doi.org/10.1038/s41591-018-0320-3

  81. Panch, T., Mattie, H., & Celi, L. A. (2019). The “inconvenient truth” about AI in healthcare. Npj Digital Medicine, 2(1), 77. https://doi.org/10.1038/s41746-019-0155-4

  82. Papanicolas, I., Mossialos, E., Gundersen, A., Woskie, L., & Jha, A. K. (2019). Performance of UK National Health Service compared with other high income countries: Observational study. BMJ, l6326. https://doi.org/10.1136/bmj.l6326

  83. Powles, J., & Hodson, H. (2017). Google DeepMind and healthcare in an age of algorithms. Health and Technology, 7(4), 351–367. https://doi.org/10.1007/s12553-017-0179-1

  84. Presser, L., Hruskova, M., Rowbottom, H., & Kancir, J. (2015). Care.data and access to UK health records: Patient privacy and public trust. Technology Science.

  85. Price, W. N. (2018). Big data and black-box medical algorithms. Science Translational Medicine, 10(471), eaao5333. https://doi.org/10.1126/scitranslmed.aao5333

  86. Price, W. N., & Cohen, I. G. (2019). Privacy in the age of medical big data. Nature Medicine, 25(1), 37–43. https://doi.org/10.1038/s41591-018-0272-7

  87. Reddy, S., Fox, J., & Purohit, M. P. (2019). Artificial intelligence-enabled healthcare delivery. Journal of the Royal Society of Medicine, 112(1), 22–28. https://doi.org/10.1177/0141076818815510

  88. Reed, C. (2018). How should we regulate artificial intelligence? Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2128), 20170360. https://doi.org/10.1098/rsta.2017.0360

  89. Reisman, Y. (1996). Computer-based clinical decision aids: A review of methods and assessment of systems. Medical Informatics, 21(3), 179–197. https://doi.org/10.3109/14639239609025356

  90. Riso, B., Tupasela, A., Vears, D. F., Felzmann, H., Cockbain, J., Loi, M., Kongsholm, N. C. H., Zullo, S., & Rakic, V. (2017). Ethical sharing of health data in online platforms – which values should be considered? Life Sciences, Society and Policy, 13(1). https://doi.org/10.1186/s40504-017-0060-z

  91. Rivett, G. (1998). From cradle to grave: Fifty years of the NHS (Repr.). King’s Fund.

  92. Rumbold, J. M. M., & Pierscionek, B. K. (2017). A critique of the regulation of data science in healthcare research in the European Union. BMC Medical Ethics, 18(1), 27. https://doi.org/10.1186/s12910-017-0184-y

  93. Sackett, D. L., Rosenberg, W. M. C., Gray, J. A. M., Haynes, R. B., & Richardson, W. S. (1996). Evidence based medicine: What it is and what it isn’t. BMJ, 312(7023), 71–72. https://doi.org/10.1136/bmj.312.7023.71

  94. Seneviratne, M. G., Shah, N. H., & Chu, L. (2020). Bridging the implementation gap of machine learning in healthcare. BMJ Innovations, 6(2), 45–47. https://doi.org/10.1136/bmjinnov-2019-000359

  95. Shah, H. (2017). The DeepMind debacle demands dialogue on data. Nature, 547(7663), 259. https://doi.org/10.1038/547259a

  96. Sharpe, V. A. (1997). Why ‘do no harm’? Theoretical Medicine and Bioethics, 18(1–2), 197–215. https://doi.org/10.1023/A:1005757606106

  97. Shaw, J., Rudzicz, F., Jamieson, T., & Goldfarb, A. (2019). Artificial Intelligence and the Implementation Challenge. Journal of Medical Internet Research, 21(7), e13659. https://doi.org/10.2196/13659

  98. Sood, H. S., & McNeil, K. (2017). How is health information technology changing the way we deliver NHS hospital care? Future Hospital Journal, 4(2), 117–120. https://doi.org/10.7861/futurehosp.4-2-117

  99. Castle-Clarke, S. (2018). What will new technology mean for the NHS and its patients? https://apo.org.au/node/205731

  100. Stevenson, F. (2015). The use of electronic patient records for medical research: Conflicts and contradictions. BMC Health Services Research, 15(1). https://doi.org/10.1186/s12913-015-0783-6

  101. Steventon, A., Deeny, S. R., Keith, J., & Wolters, A. T. (2019). New AI laboratory for the NHS. BMJ, l5434. https://doi.org/10.1136/bmj.l5434

  102. Stockdale, J., Cassell, J., & Ford, E. (2019). “Giving something back”: A systematic review and ethical enquiry into public views on the use of patient data for research in the United Kingdom and the Republic of Ireland. Wellcome Open Research, 3, 6. https://doi.org/10.12688/wellcomeopenres.13531.2

  103. The Kings Fund. (2021, March 1). NHS trusts in deficit. https://www.kingsfund.org.uk/projects/nhs-in-a-nutshell/trusts-deficit

  104. The Lancet. (2017). Artificial intelligence in health care: Within touching distance. The Lancet, 390(10114), 2739. https://doi.org/10.1016/S0140-6736(17)31540-4

  105. The National Data Guardian. (2020). The Caldicott Principles. https://www.gov.uk/government/publications/the-caldicott-principles

  106. Amann, J., Blasimme, A., Vayena, E., Frey, D., & Madai, V. I., on behalf of the Precise4Q consortium. (2020). Explainability for artificial intelligence in healthcare: A multidisciplinary perspective. BMC Medical Informatics and Decision Making, 20(1), 310. https://doi.org/10.1186/s12911-020-01332-6

  107. Thesmar, D., Sraer, D., Pinheiro, L., Dadson, N., Veliche, R., & Greenberg, P. (2019). Combining the Power of Artificial Intelligence with the Richness of Healthcare Claims Data: Opportunities and Challenges. PharmacoEconomics, 37(6), 745–752. https://doi.org/10.1007/s40273-019-00777-6

  108. Thompson, C. L., & Morgan, H. M. (2020). Ethical barriers to artificial intelligence in the national health service, United Kingdom of Great Britain and Northern Ireland. Bulletin of the World Health Organization, 98(4), 293–295. https://doi.org/10.2471/BLT.19.237230

  109. Tonelli, M. R. (1998). The philosophical limits of evidence-based medicine. Academic Medicine, 73(12), 1234–1240.

  110. Topol, E. (2019). Preparing the healthcare workforce to deliver the digital future. NHS Health Education England. https://topol.hee.nhs.uk/

  111. van Staa, T.-P., Goldacre, B., Buchan, I., & Smeeth, L. (2016). Big health data: The need to earn public trust. BMJ, i3636. https://doi.org/10.1136/bmj.i3636

  112. Vayena, E., Blasimme, A., & Cohen, I. G. (2018). Machine learning in medicine: Addressing ethical challenges. PLoS Medicine, 15(11), e1002689. https://doi.org/10.1371/journal.pmed.1002689

  113. Vollmer, S., Mateen, B. A., Bohner, G., Király, F. J., Ghani, R., Jonsson, P., Cumbers, S., Jonas, A., McAllister, K. S. L., Myles, P., Grainger, D., Birse, M., Branson, R., Moons, K. G. M., Collins, G. S., Ioannidis, J. P. A., Holmes, C., & Hemingway, H. (2020). Machine learning and artificial intelligence research for patient benefit: 20 critical questions on transparency, replicability, ethics, and effectiveness. BMJ, l6927. https://doi.org/10.1136/bmj.l6927

  114. Wachter, R. (2016). Making IT Work: Harnessing the Power of Health Information Technology to Improve Care in England. Department of Health and Social Care. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/550866/Wachter_Review_Accessible.pdf

  115. Waring, J., Lindvall, C., & Umeton, R. (2020). Automated machine learning: Review of the state-of-the-art and opportunities for healthcare. Artificial Intelligence in Medicine, 104, 101822. https://doi.org/10.1016/j.artmed.2020.101822

  116. Xiao, C., Choi, E., & Sun, J. (2018). Opportunities and challenges in developing deep learning models using electronic health records data: A systematic review. Journal of the American Medical Informatics Association, 25(10), 1419–1428. https://doi.org/10.1093/jamia/ocy068

  117. Young, T. (2017). Can innovation help us deliver an NHS for the 21st century? The British Journal of General Practice: The Journal of the Royal College of General Practitioners, 67(657), 152–153. https://doi.org/10.3399/bjgp17X690053

