  • Jessica Morley

What do we know about making CDSS work in the NHS?

This is a short version of the literature review in my PhD thesis, explaining what is already known about how to successfully develop, deploy, and use algorithmic clinical decision support systems (ACDSS) in the NHS.

The question ‘what are the design requirements for the successful development, deployment and use of ACDSS in the NHS?’ is a multidisciplinary one. The literature reflects this, with attempts to answer parts of the question rooted in disciplines as wide-ranging as: social psychology, anthropology and sociology, philosophy, organisational sociology, information systems (Shaw et al., 2017), medical sociology, communication studies, marketing and economics, development studies, health promotion, complexity studies, and more (Greenhalgh et al., 2004). What follows is a brief overview of this wide-ranging literature, demonstrating how different scholars from different disciplines have approached different parts of the question from different angles and at different levels of abstraction (LoA).

Achieving change in the NHS

Over its more than 70-year lifespan the NHS has been subject to almost continuous top-down pressure to adapt to changes in the demographic make-up of the population it serves, to political and cultural changes, to changes in medicine, and to technological change (Ashburner et al., 1996). At times this pressure is felt implicitly through changes in NHS contracts and financial incentives, at other times it is felt through explicit attempts by the Government to achieve ‘profound institutional change’ involving new organisational forms, new governance mechanisms, and new population boundaries. It is no surprise, therefore, that much of the published literature operating at the highest level of abstraction tries to analyse examples of successful and failed transformation initiatives in order to identify the mechanisms that are most effective at producing the intended outcomes (Best et al., 2012). In essence, this literature – which primarily draws from organisational sociology and social psychology – frames achieving large-scale change in the NHS as a management problem and attempts to identify the macro, meso and micro factors that facilitate or hinder the institution’s ability to adapt to changes in strategic intent (Pettigrew et al., 1988).

To add specificity to this description, Asthana et al. (2019) list the macro, meso and micro factors influencing the NHS’s ability to adopt new innovations as: incentives and regulatory requirements; commissioning targets and financial pressures; and the impact on the values, priorities and routines of staff. Going into more detail, Allcock et al. (2015) identify seven factors that are essential if change is to be successfully achieved at any level of the NHS. These factors include committed and respected leadership; a culture hospitable to, and supportive of, change; and the ability of the workforce to identify and solve problems. As Macfarlane et al. (2013) highlight, whilst the NHS is often referred to as a ‘national institution’ operating as a homogenous brand, it is in fact a far more heterogeneous collection of different organisations exhibiting a growing degree of divergence. Thus, it is to be expected that Allcock et al. (2015) also find the presence of the seven factors to be inconsistent across the NHS as a whole – a fact that makes efforts to improve services and make changes considerably more difficult than they would be otherwise. Finally, Scott (2003) and Ferlie et al. (2012) add a critical perspective on this variation in the ability to successfully adapt to, or implement, change across the NHS, attributing it, respectively, to ‘cultural lag’ – where, in some parts of the NHS, there is considerable dissonance between the prevailing culture of the organisation and that of wider society – and to differences in organisational power relations.

Adoption, scale-up and spread

Moving to the next LoA, the relevant body of published literature spreads out from Rogers’ original (1962) diffusion of innovations theory, which attempted to identify the process by which (1) an innovation (2) is communicated through certain channels (3) over time (4) among the members of a social system (Rogers, 2003), and attempts to identify and understand the factors driving the successful (or unsuccessful) adoption and diffusion of information technologies in healthcare settings (Ljubicic et al., 2020). It is a sprawling body of literature, covering everything from comprehensive theories of social practice – such as Latour’s actor-network theory (Cresswell et al., 2010), Giddens’ structuration theory (Giddens, 1984), the social construction of technology (Klein & Kleinman, 2002; Pinch & Bijker, 1984), and normalisation process theory (J. Shaw et al., 2017) – through to hyper-specific metrics designed to predict the likelihood of a specific NHS organisation adopting a specific innovation, such as the ‘Innovation Readiness Score’ (Benson, 2019). Although very different in approach, all these theories and metrics attempt to answer the question: ‘how do we begin to theorise what happens at macro, meso, and micro levels when government tries to “modernise” a health service with the help of big IT?’ (Greenhalgh & Stones, 2010). This perhaps explains why it is this body of literature that the NHS itself has shown greatest interest in, with the UK Department of Health commissioning a systematic review on the topic in 2002 as part of its NHS Service Delivery and Organisation Programme (Greenhalgh et al., 2004).

The range of potential contributing factors that can be extracted from these theories of social practice and individual methods is vast, including: digital literacy, usefulness, ease of use, motivation to change behaviour, anticipated benefits, perceived effort, social norms, and user optimism in technology. To try to reduce this complexity, scholars have also tried to combine these different factors into models that can be more readily applied to specific cases. These models include the Technology Acceptance Model (TAM) (F. D. Davis, 1989), the extended TAM (Venkatesh & Davis, 2000), the unified theory of acceptance and use of technology (Venkatesh et al., 2003), and the conceptual Population-Intervention-Environment Transfer Model of Transferability (Schloemer & Schröder-Bäck, 2018). However, of all these available models the most directly relevant is the non-adoption, abandonment, scale-up, spread, and sustainability (NASSS) framework developed by Greenhalgh et al. (2017) to explain why certain technology programmes fail in the NHS whilst others succeed, by focusing on the degree of complexity exhibited in seven domains: the condition or illness, the technology, the value proposition, the adopter system, the organisation, the wider (institutional and societal) context, and the interaction and mutual adaptation between all these domains over time (Greenhalgh et al., 2018).

Despite the potential explanatory and predictive power of these, and other unlisted, models, there is very little evidence in the literature of how these theories might be applied to inform the development of more effective policy and strategy from the beginning, with most theories only being ‘tested’ via retroactive case study analysis. For example, Wainwright & Waring (2007) apply Rogers’ diffusion of innovations theory to four different case studies looking at the uptake of new information systems within NHS General Practices, and Johnson et al. (2014) use the Technology Acceptance Model to identify the technological, organisational and behavioural factors affecting clinician acceptance of traditional CDSS. In general, it is felt that further work is needed to refine these different models before they can be used to develop and test predictive hypotheses – particularly in the context of complex information technologies, such as ACDSS (Ward, 2013). This evident lack of theory-testing somewhat limits the utility of these models and theories for this thesis; although they cannot be applied in their entirety, their component concepts remain highly relevant. The key takeaway, however, is the importance of acknowledging and accounting for the complexity derived from the interactions between the different ‘requirements’ extracted at the four different levels listed above (Greenhalgh, 2018). If complexity is not acknowledged, or is sub-optimally handled by policymakers, there is almost no chance of the successful development, deployment and use of ACDSS in the NHS.

ACDSS: methods and applications

Arriving at the meso, and more technical, LoA, the literature charts the development of ACDSS over time: from traditional logic-based models in the late 1950s and early 1960s (Reisman, 1996), through the emergence of ‘evidence-adaptive’ CDSS designed to improve adherence to official guidelines in the 2000s (De Clercq et al., 2004; Sim et al., 2001), up to the use of machine learning techniques such as random forests and neural networks today (Harerimana et al., 2018; Levy-Fix et al., 2019). Mostly, the published literature in this domain provides useful technical background and serves as a reminder that the ‘requirements’ for success will depend on the type of model used in the CDSS. For example, for traditional presentational CDSS, speed of response and ease of use matter the most for adoption and acceptability (Bates et al., 2003), whereas for inferential, machine-learning-driven ACDSS, the factors that influence clinician and patient trust – for example explainability, reliability and validity (Shortliffe & Sepúlveda, 2018) – are most important (Vourgidis et al., 2019).

The Ethics of AI for healthcare

Staying at the meso LoA but moving from literature grounded in technical disciplines to literature grounded in philosophy, there has recently been a burgeoning of publications focusing on the ethical implications of using AI (including in ACDSS) in healthcare. This is to be expected given medicine’s overall mandate of ‘do no harm’ (Manchikanti & Hirsch, 2015) and the proliferation of normative documents (Schiff et al., 2020) addressing the ethics of AI, including ethics codes, guidelines and frameworks from private companies and public bodies alike (see, for example, Greene et al., 2019; Hagendorff, 2020; Jobin et al., 2019; Terzis, 2020). Beyond such analyses of high-level ethics principles (typically beneficence, non-maleficence, autonomy, justice and explicability (Floridi & Cowls, 2019)), most of this sub-section of the published literature consists of theoretical discussions about the potential impact of AI on key tenets of care. Specifically, by identifying ethical challenges related to data, process, and management (Xafis & Labude, 2019), this literature highlights the implications for consent (Andorno, 2004; Brill et al., 2019; Findley et al., 2020), safety (especially the risks of misdiagnosis to those under-represented in existing datasets due to bias) (see, for example, Gianfrancesco et al., 2018; Hague, 2019), empathy (see, for example, Kerasidou, 2020) and, of course, privacy (see, for example, Abouelmehdi et al., 2017; Bartoletti, 2019; Price & Cohen, 2019).

Alongside these theoretical discussions of ethical implications and ethical principles, there is a growing literature critiquing the purely principle-based approach to the ethics of AI in general (Mittelstadt, 2019; Morley et al., 2019), and especially to the ethics of AI in healthcare, given its safety-critical nature (Morley & Floridi, 2020). Criticism of the field of AI ethics in general falls into one of three categories: (1) the sheer number of normative documents available allows developers of AI to ‘shop around’ for the guideline that is most convenient to commit to for ethics-washing purposes (Floridi, 2019); (2) principles are difficult to operationalise in practice, with little guidance available to AI developers on how to formalise pro-ethical design through code (Lewis, 2020; Morley et al., 2021; Schönberger, 2019; Winfield & Jirotka, 2018); or (3) ethical principles lack ‘teeth’ and so do little to genuinely protect individuals, groups and societies from harm, but do protect private companies by delaying the imposition of ‘harder’ methods of regulation (see, for example, Bresó et al., 2015; Clarke, 2019a; Rességuier & Rodrigues, 2020). Criticism of the field of AI-for-healthcare ethics also covers these three primary complaints, but adds to the list the lack of attention paid to complex optimisation trade-offs (Whittlestone et al., 2019). For example: should an ACDSS be recommended for use in the NHS when each group benefits from it individually, but the application widens inequities in health outcomes overall by benefitting one group more than another (Hardt & Chin, 2020)?

Although not particularly helpful for policymakers and legislators currently, this combined analysis of how AI poses a threat to the key principles of the ethics of care and the limitations of this principles-based approach is useful for this thesis in that it both highlights the risks that the identified ‘requirements’ should seek to mitigate and stresses the importance of making these requirements actionable. Thus, the key takeaway from this body of literature is twofold: (1) a comprehensive (or holistic) approach will be necessary when identifying the requirements to ensure points of tension are recognised and appropriately dealt with (La Fors et al., 2019); and (2) it will not be sufficient to simply list the identified requirements, it will be necessary to also identify how these requirements can be met by those developing ACDSS (M. Sendak et al., 2020).

The Medico-Legal Context

Remaining at the same LoA but moving from literature focusing on soft governance measures to literature focusing on hard governance measures (Floridi, 2018), there is a growing body of published work analysing the legal implications of using ACDSS. Given the multifaceted nature of ACDSS, it is unsurprising that this literature touches upon a wide range of legal fields, from legal philosophy, human rights, and tort law to contract, product, and medical device law (Perc et al., 2019). As there have not yet been any (known) cases of, for example, a patient being harmed as the result of a misdiagnosis by an ACDSS, most of the literature in this domain is speculative in nature – focused on identifying how the use of ACDSS might challenge the way in which existing legislation is traditionally applied. For example, Favaretto et al. (2019) are quick to point out that interpretations of anti-discrimination and data protection legislation will need updating as, in the context of AI (ACDSS), core notions such as motive and intention will no longer apply. In short, papers of this nature seek to answer the question: ‘what are the appropriate legal and regulatory requirements (or standards) for ACDSS?’ (Moses, 2016; Rhem, 2021).

Currently, the literature is struggling to answer this question as most papers are either highly generic, for example focusing on the principal components of a proportionate governance framework for ACDSS (Morley & Joshi, 2019; Reddy et al., 2020; Sethi & Laurie, 2013), or very specific, such as the challenge posed by the increasingly blurred lines between frontline care and research (Braun et al., 2020) or the challenge posed by the updating nature of self-learning algorithms for existing, static, product-based medical device law (see, for example, Becker et al., 2019; Fraser et al., 2018; Gerke, Minssen, et al., 2020; Hwang et al., 2019; Lee et al., 2020; Smith et al., 2003). This ‘extremism’ limits the generalisability of the conclusions or findings of papers of this nature, particularly as they tend to be applicable to only one narrow jurisdiction – Germany, for example, in the case of Molnár-Gábor’s (2020) analysis of product liability and ACDSS. Additionally, it is clear that the literature is struggling to keep pace with changes in context and regulation. For example, Carroll & Richardson (2016) and McCarthy & Lawford (2015) provide detailed analysis of how the European Union Medical Device Directive (MDD) could be applied to the development of AI, but the MDD was replaced by the Medical Device Regulation (MDR) in 2020 and there are, as yet, no such applied analyses of the MDR. In the case of the NHS, this is further complicated by the fact that British law is likely to increasingly diverge from European law, as evidenced by the consultation recently published by the UK Medicines and Healthcare products Regulatory Agency (MHRA) (MHRA, 2021). Finally, this body of literature lacks balance, with considerably more attention paid to issues of medical malpractice liability than to any other issue (see, for example, Price et al., 2019).

Overall, despite these limitations, the literature does make clear that the use of ACDSS in the NHS would significantly stretch current formal laws designed to protect from, and compensate for, harm (Clarke, 2019a). This creates a vacuum, as self-regulatory ethical codes appear to be doing little to protect patients from the ethical harms of ACDSS and formal laws also appear inadequate. Clarke (2019a) suggests that what is needed is a co-regulatory approach (also referred to as the middle-out approach (Pagallo et al., 2019)), which would involve legislation that establishes key elements such as authority, obligations and general principles that the regulatory system needs to satisfy, as well as sanctions and enforcement mechanisms, but delegates the responsibility for exactly how to meet these obligations to other key stakeholders, who develop the details through in-depth consultation. Thus, the key takeaway is that the aim of the requirements should be to enable the NHS to adopt a middle-out approach to regulating the use of ACDSS.

Practical implementation and evaluation

Finally, at the lowest LoA, there is a rapidly growing literature, born out of the increasing desire to move AI out of ‘the model graveyard’ and into frontline care, that attempts to identify the steps and considerations involved in the practical implementation of AI systems for health (including ACDSS). Unsurprisingly, a not insignificant number of the topics and themes raised in papers published within this domain overlap with those raised in the ethics and medico-legal literature. For example, in their paper focused on developing a ‘roadmap for responsible machine learning for healthcare’, Wiens et al. (2019) conclude that what is needed more than anything else is regulatory incentives. Similarly, Seneviratne et al. (2020), in their paper on ‘bridging the implementation gap of machine learning in healthcare’, and Tran et al. (2019), in their paper describing a ‘framework for applied AI in healthcare’, both stay at the ethical-principles LoA, noting the importance of safety, trust and ethics as well as the importance of a robust regulatory strategy. Other papers, notably Chen et al.’s (2019) ‘how to develop machine learning models for healthcare’, He et al.’s (2019) ‘practical implementation of AI technologies in medicine’, Hopkins et al.’s (2020) ‘from AI algorithms to clinical application’, and Shah et al.’s (2019) ‘translational perspective’ on AI and machine learning in clinical development, focus on listing the steps involved in developing and training a machine learning model, from defining the problem to training the workforce to use the model. Lastly, there are papers which take a product development approach and seek to outline what is involved at each stage from ‘idea formation’ through to ‘post-market surveillance’ (see, for example, Higgins & Madai, 2020; M. P. Sendak et al., 2020).

For this thesis it is noteworthy that, even though all these papers are effectively trying to answer a version of the overarching research question (albeit from a primarily technical perspective), there is very little agreement between the different frameworks on what exactly the requirements for ‘successful’ implementation are. The only exception to this is the importance of technically and clinically evaluating the performance of any models destined to be used in frontline care. The greater alignment in this one area can be attributed to two influencing factors. First, since the adoption of evidence-based medicine in the 1990s, it has been near-impossible to recommend the use of any medical product, device, or treatment without verifiable evidence of its effectiveness generated through robust and independent evaluation (Burns et al., 2011; McCradden et al., 2020). Second, evaluation has been a key part of the literature on AI in healthcare since the 1970s (Miller, 1986). Yet exactly what evaluation entails, and what ‘counts’ as valid evidence in this context, remains up for debate.

There is general agreement about what the aims of evaluation are. As Van Calster et al. (2019) explain, in general, evaluation aims to check: the model’s ability to differentiate between individuals with and without disease (discrimination); the ‘accuracy’ of predictions, that is, the agreement between predicted risks and observed outcomes (calibration); and the extent to which the model has been ‘overfitted’ to either the dataset (and therefore the population) it was trained on or the operational environment in which it was trained (for example, the type of electronic health record software used). Others separately list the aims of user-experience testing, noting the importance of evaluating a model’s accessibility and reliability (Magrabi et al., 2019); impact on workflow (Rigby et al., 2007); and user-friendliness and maintainability (Rivers & Rivers, 2000). Disagreements emerge regarding the weight given to each of these components (i.e. whether user-friendliness or accuracy is more important) and exactly how to evaluate each of them.
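To make the first two of these aims concrete, the toy sketch below (purely illustrative, and not drawn from any of the cited papers) computes discrimination as the area under the ROC curve, via its rank-based (Mann–Whitney) interpretation, and one simple calibration statistic, ‘calibration-in-the-large’:

```python
def discrimination_auc(y_true, y_score):
    """AUC via its rank interpretation: the probability that a randomly
    chosen individual with the disease receives a higher risk score than
    a randomly chosen individual without it (ties count as half)."""
    pos = [s for y, s in zip(y_true, y_score) if y == 1]
    neg = [s for y, s in zip(y_true, y_score) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))


def calibration_in_the_large(y_true, y_prob):
    """Mean predicted risk minus observed event rate; a well-calibrated
    model yields a value close to zero."""
    return sum(y_prob) / len(y_prob) - sum(y_true) / len(y_true)


# Hypothetical example: four patients, two with the disease (1), two without (0)
y_true = [0, 0, 1, 1]
y_prob = [0.1, 0.4, 0.35, 0.8]
print(discrimination_auc(y_true, y_prob))        # 0.75
print(calibration_in_the_large(y_true, y_prob))  # about -0.0875 (slight under-prediction)
```

Overfitting, the third aim, would then be assessed by comparing both statistics between the training data and a held-out dataset drawn from the intended deployment setting: a large drop in either signals a model fitted too closely to its development environment.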

Early publications, for example Miller (1986) and Anderson & Aydin (1997), take a question-and-answer-based approach, listing the questions that should be asked at each stage of evaluation (from testing the model’s knowledge in silico to testing the model’s reliability in situ) and the steps involved. For example: Does the model work technically as designed? Is the model being used as anticipated? Does the model produce the desired results? And does the system work better than the procedures it replaced? This latter question is the focus of more recent papers, which all dedicate a considerable amount of ‘ink’ to debating whether or not AI models should be subject to evaluation by randomised clinical trial (RCT). All scholars recognise the value that RCTs bring, and note that they are perhaps the only way to identify all possible deficiencies in a model’s performance, accepting that this is why RCTs are currently considered the ‘gold standard’ of evidence in medicine (see, for example, Angus, 2020; Nsoesie, 2018; Park & Han, 2018). However, many others are keen to highlight that whilst it might be possible to conduct an RCT for image-recognition algorithms and compare their performance to that of clinicians (Nagendran et al., 2020; Shen et al., 2019), it is considerably harder to do so for predictive models based on EHR data (such as ACDSS), and especially hard when the ‘new’ intervention becomes the type of model used (for example, random forest versus k-nearest neighbour) (Elkin et al., 2018). Further limitations include the cost of RCTs, the need for a ‘golden baseline’ against which to test – which for many AI interventions does not exist – and the additional complexity of coping with versioning (Arora, 2020).
These complexities, at least partially, explain why RCTs are not yet required for regulatory approval of AI systems for healthcare (Spiegelhalter, 2020) and why other papers avoid the topic completely and focus purely on debating the advantages and disadvantages of different statistical methods for evaluating sensitivity and specificity (see for example England & Cheng, 2019; Handelman et al., 2019).
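For illustration, the two statistics at the centre of that methodological debate can be computed directly from a confusion matrix. The sketch below is a minimal, hypothetical example, not taken from any of the cited papers:

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity: the proportion of true cases the model detects
    (1 minus the type 2 error rate). Specificity: the proportion of
    non-cases it correctly rules out (1 minus the type 1 error rate)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)


# Hypothetical example: five patients, three with the disease
sens, spec = sensitivity_specificity([1, 1, 1, 0, 0], [1, 1, 0, 0, 1])
print(sens, spec)  # 0.6666666666666666 0.5
```

The point estimates themselves are trivial; the disagreement in the literature concerns how they behave under different decision thresholds, disease prevalences, and sampling designs, which is precisely why the choice of statistical evaluation method matters.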

From the perspective of this thesis, this lack of agreement is beginning to be problematic because it has started to make it difficult for policymakers to differentiate between ‘what is interesting’ and ‘what is necessary’ (Miller & Sittig, 1990) in the implementation and evaluation of AI for healthcare. This is evident from the proliferation of slightly-different-but-overlapping reporting and publishing guidelines that have recently been published or announced. For example:

  • TRIPOD-ML statement (Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis – Machine Learning extension) (Collins & Moons, 2019): covering the reporting standards for in silico algorithm development i.e. train vs. test validation;

  • CONSORT-AI (Consolidated Standards of Reporting Trials – AI extension) and SPIRIT-AI (Standard Protocol Items: Recommendations for Interventional Trials – AI extension) statements (Liu et al., 2020; Park & Kressel, 2018; The CONSORT-AI and SPIRIT-AI Steering Group, 2019): covering the reporting standards of in situ RCTs and interventional trials involving AI or machine learning;

  • DECIDE-AI: developing reporting guidelines for a small-scale clinical evaluation that would sit between in silico (TRIPOD-ML) and in situ (CONSORT-AI and SPIRIT-AI) evaluations (The DECIDE-AI Steering Group, 2021);

  • STARD-AI (Standards for Reporting of Diagnostic Accuracy Studies – AI extension): covering reporting standards for studies specifically using statistical analysis to test diagnostic accuracy (Sounderajah et al., 2020);

  • QUADAS-AI (Quality Assessment of Diagnostic Accuracy Studies – AI extension): a proposed extension to the QUADAS tool used to assess bias and applicability in diagnostic accuracy studies (Sounderajah et al., 2021).

Although it is heartening to see this commitment to transparency in the reporting of AI for healthcare – especially after the many years it took to convince pharmaceutical companies, regulators and journals of the need for the same in drug trials (Goldacre, 2016) – this proliferation of overlapping (and sometimes competing) standards and reporting guidelines is likely to hinder rather than help the ‘successful’ development, deployment and use of ACDSS. It will likely cause AI developers confusion and frustration, potentially encouraging them to disengage from the endeavour completely. Indeed, it already seems that this might be the case, with a recent systematic review finding that most studies developing or validating ACDSS failed to use any of the available reporting guidelines and so lacked adequate detail for assessment, interpretation, and reproducibility (Yusuf et al., 2020). Thus, the key takeaway here is that it will be essential to integrate and synthesise the various ‘evaluation’ requirements from reporting, regulatory, functional, economic, and ethical perspectives, and to ensure that they take into account the full range of potential issues: from objective to subjective, from UX to knowledge accuracy, and from value for money to impact on outcomes (Miller, 1986; Reddy et al., 2021).

Time for a new approach

From this brief overview, it is clear that the absence of a comprehensive answer to the question ‘what are the design requirements for the successful development, deployment, and use of ACDSS in the NHS?’ is not for want of people asking it. The literature has, after all, revealed several key insights:

  • To ensure success, it will be necessary to consider the requirements from the perspective of individual professionals and patients; healthcare groups; individual organisations providing NHS services; and the wider healthcare system.

  • It will not be sufficient to identify the individual requirements, it will also be necessary to consider the interactions between the different requirements.

  • Requirements may vary depending on the type of model used in the ACDSS and thus it will be necessary to be specific about the limits of the applicability of the identified requirements.

  • As well as identifying the interactions between the requirements, it will also be necessary to identify when they come into conflict with each other and how to deal with these conflicts.

  • As ACDSS formalise rules through code, it will not be enough to simply list the requirements; it will also be necessary to describe how they can be met by those developing ACDSS.

  • The aim of the requirements should be to enable the NHS to adopt a middle-out approach to regulating the use of ACDSS.

  • The requirements must be extracted and synthesised from a variety of ‘evaluation’ perspectives, from reporting, regulatory, functional, and economic through to ethical and clinical.

Each of these insights, however, came from a different body of literature focused either at a very high LoA of principles and hypothetical possibilities (Arora, 2020) or at a very low, technical-specifications LoA (Sendak et al., 2020), with little or no consideration given to either horizontal (for example, between ethics and regulation, and diffusion of innovation models) or vertical (between the two LoAs) interactions. As such, these (and other) insights from the existing literature provide a foundation for thinking about how ACDSS might be successfully designed and adopted in the NHS, but leave many questions open about the specific mechanisms involved in the translation process from ideation to implementation (Cohen et al., 2020; Sendak et al., 2020). In short, current understanding of the ‘design requirements for successful development, deployment and use of ACDSS’ is too simplistic; it leaves out ‘most of what matters’ – especially context and process (Tsoukas, 2017).

This simplistic understanding has led to the development of simplistic deterministic implementation theories that assume that if variable X is fixed then outcome Y will happen (Greenhalgh et al., 2004; Greenhalgh & Swinglehurst, 2011). For example: if risk prediction algorithms are validated against a national NHS dataset, then clinicians will have no reason not to ‘trust’ ACDSS, and adoption and implementation will be frictionless as a result. Such deterministic theories assume that a one-size-fits-all, inflexible ‘blueprint’ for the implementation of ACDSS can be developed, and fail to acknowledge the fact that success can depend on complex interactions between key personalities, implicit social norms, hierarchies of power and accountability mechanisms, healthcare policies, government regulation, embedded culture, technical infrastructure, the wider psycho-social environment, and more (Poland et al., 2005). Consequently, policymakers have been left without a conceptual map to help them identify, describe, explain, and control these interactions, and so identification of the key design requirements for the successful development, deployment and use of ACDSS has thus far been reactive, ad-hoc, and fragmented (Char et al., 2020; May, 2013). It is not surprising, therefore, that AI practitioners and other key stakeholders feel as though current policies designed to increase the adoption and use of ACDSS, such as the recently launched National Strategy for AI in Health and Social Care (NHSX, 2021), do not reflect the realities of the real world (Tsoukas, 2017) and so will do little to close the ever-widening implementation gap (Gillan et al., 2019).
Thus, the overarching research question represents not a gap, but a problematisation; ‘an attempt to know how and to what extent it might be possible to think differently instead of what is already known’ and in so doing, challenge the outlined assumptions of current approaches to implementing ACDSS in the NHS (Davis, 1971; Sandberg & Alvesson, 2011). In short, it seeks to acknowledge the fact that if the implementation gap is to be closed, then a deeper understanding of the complex and situated nature of ACDSS will be needed, otherwise policymakers’ vision of its rapid scale-up and spread will never be realised (Shaw et al., 2017).

Answering the research question, therefore, requires the development of a complexity-informed implementation theory (Greenhalgh & Papoutsi, 2018) capable of explaining how and why ACDSS might succeed or fail in different settings (Cresswell et al., 2010; Li et al., 2020). This will involve adopting what Tsoukas (2017) refers to as a ‘conjunctive’ approach to theorising: one that seeks to connect concepts traditionally used in a disjointed manner, so that the whole policy system – including its hidden agendas and, crucially, its effects on the successful implementation of ACDSS – can be described and understood (Haynes, 2008; Shaw & Stahl, 2011). Approaching the identification of the design requirements for the implementation of ACDSS in this way will ensure that all the technical, sociocultural, ethical, regulatory, and legal conditions essential for ‘success’ are given equal consideration (Kaminskas & Darulis, 2007; Ngiam & Khor, 2019) and, in so doing, help policymakers safeguard against unintended (re-ontologising) consequences of ACDSS in the NHS (O’Doherty et al., 2016).


  1. Abouelmehdi, K., Beni-Hssane, A., Khaloufi, H., & Saadi, M. (2017). Big data security and privacy in healthcare: A Review. Procedia Computer Science, 113, 73–80.

  2. Allock, C., Dorman, F., Taunt, R., & Dixon, J. (2015). Constructive comfort: Accelerating change in the NHS. The Health Foundation.

  3. Anderson, J. G., & Aydin, C. E. (1997). Evaluating the impact of health care information systems. International Journal of Technology Assessment in Health Care, 13(2), 380–393.

  4. Andorno, R. (2004). The right not to know: An autonomy based approach. Journal of Medical Ethics, 30(5), 435–439.

  5. Angus, D. C. (2020). Randomized Clinical Trials of Artificial Intelligence. JAMA.

  6. Arora, A. (2020). Conceptualising Artificial Intelligence as a Digital Healthcare Innovation: An Introductory Review. Medical Devices: Evidence and Research, Volume 13, 223–230.

  7. Ashburner, L., Ferlie, E., & FitzGerald, L. (1996). Organizational Transformation and Top-Down Change: The Case of the NHS. British Journal of Management, 7(1), 1–16.

  8. Asthana, S., Jones, R., & Sheaff, R. (2019). Why does the NHS struggle to adopt eHealth innovations? A review of macro, meso and micro factors. BMC Health Services Research, 19(1), 984.

  9. Bartoletti, I. (2019). AI in Healthcare: Ethical and Privacy Challenges. In D. Riaño, S. Wilk, & A. ten Teije (Eds.), Artificial Intelligence in Medicine (Vol. 11526, pp. 7–10). Springer International Publishing.

  10. Bates, D. W., Kuperman, G. J., Wang, S., Gandhi, T., Kittler, A., Volk, L., Spurr, C., Khorasani, R., Tanasijevic, M., & Middleton, B. (2003). Ten Commandments for Effective Clinical Decision Support: Making the Practice of Evidence-based Medicine a Reality. Journal of the American Medical Informatics Association, 10(6), 523–530.

  11. Becker, K., Lipprandt, M., Röhrig, R., & Neumuth, T. (2019). Digital health—Software as a medical device in focus of the medical device regulation (MDR). IT - Information Technology, 61(5–6), 211–218.

  12. Benson, T. (2019). Digital innovation evaluation: User perceptions of innovation readiness, digital confidence, innovation adoption, user experience and behaviour change. BMJ Health & Care Informatics, 26(1), 0.5-0.

  13. Best, A., Greenhalgh, T., Lewis, S., Saul, J. E., Carroll, S., & Bitz, J. (2012). Large-System Transformation in Health Care: A Realist Review. Milbank Quarterly, 90(3), 421–456.

  14. Braun, M., Hummel, P., Beck, S., & Dabrock, P. (2020). Primer on an ethics of AI-based decision support systems in the clinic. Journal of Medical Ethics, medethics-2019-105860.

  15. Bresó, A., Sáez, C., Vicente, J., Larrinaga, F., Robles, M., & García-Gómez, J. M. (2015). Knowledge-based personal health system to empower outpatients of diabetes mellitus by means of p4 medicine (Vol. 1246).

  16. Brill, S. B., Moss, K. O., & Prater, L. (2019). Transformation of the Doctor–Patient Relationship: Big Data, Accountable Care, and Predictive Health Analytics. HEC Forum.

  17. Burns, P. B., Rohrich, R. J., & Chung, K. C. (2011). The Levels of Evidence and Their Role in Evidence-Based Medicine: Plastic and Reconstructive Surgery, 128(1), 305–310.

  18. Carroll, N., & Richardson, I. (2016). Software-as-a-Medical Device: Demystifying Connected Health regulations. Journal of Systems and Information Technology, 18(2), 186–215.

  19. Char, D. S., Abràmoff, M. D., & Feudtner, C. (2020). Identifying Ethical Considerations for Machine Learning Healthcare Applications. The American Journal of Bioethics, 20(11), 7–17.

  20. Chen, P.-H. C., Liu, Y., & Peng, L. (2019). How to develop machine learning models for healthcare. Nature Materials, 18(5), 410–414.

  21. Clarke, R. (2019). Regulatory alternatives for AI. Computer Law & Security Review.

  22. Cohen, I. G., Evgeniou, T., Gerke, S., & Minssen, T. (2020). The European artificial intelligence strategy: Implications and challenges for digital health. The Lancet Digital Health, 2(7), e376–e379.

  23. Collins, G. S., & Moons, K. G. M. (2019). Reporting of artificial intelligence prediction models. The Lancet, 393(10181), 1577–1579.

  24. Cresswell, K. M., Worth, A., & Sheikh, A. (2010). Actor-Network Theory and its role in understanding the implementation of information technology developments in healthcare. BMC Medical Informatics and Decision Making, 10(1), 67.

  25. Davis, F. D. (1989). Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology. MIS Quarterly, 13(3), 319.

  26. Davis, M. S. (1971). That’s Interesting!: Towards a Phenomenology of Sociology and a Sociology of Phenomenology. Philosophy of the Social Sciences, 1(2), 309–344.

  27. De Clercq, P. A., Blom, J. A., Korsten, H. H. M., & Hasman, A. (2004). Approaches for creating computer-interpretable guidelines that facilitate decision support. Artificial Intelligence in Medicine, 31(1), 1–27.

  28. Eccles, M., Grimshaw, J., Walker, A., Johnston, M., & Pitts, N. (2005). Changing the behavior of healthcare professionals: The use of theory in promoting the uptake of research findings. Journal of Clinical Epidemiology, 58(2), 107–112.

  29. Elkin, P. L., Schlegel, D. R., Anderson, M., Komm, J., Ficheur, G., & Bisson, L. (2018). Artificial Intelligence: Bayesian versus Heuristic Method for Diagnostic Decision Support. Applied Clinical Informatics, 9(2), 432–439.

  30. England, J. R., & Cheng, P. M. (2019). Artificial Intelligence for Medical Image Analysis: A Guide for Authors and Reviewers. American Journal of Roentgenology, 212(3), 513–519.

  31. Favaretto, M., De Clercq, E., & Elger, B. S. (2019). Big Data and discrimination: Perils, promises and solutions. A systematic review. Journal of Big Data, 6(1), 12.

  32. Ferlie, E., Crilly, T., Jashapara, A., & Peckham, A. (2012). Knowledge mobilisation in healthcare: A critical review of health sector and generic management literature. Social Science & Medicine, 74(8), 1297–1304.

  33. Findley, J., Woods, A., Robertson, C., & Slepian, M. (2020). Keeping the Patient at the Center of Machine Learning in Healthcare. The American Journal of Bioethics, 20(11), 54–56.

  34. Floridi, L. (2008). The Method of Levels of Abstraction. Minds and Machines, 18(3), 303–329.

  35. Floridi, L. (2018). Soft ethics, the governance of the digital and the General Data Protection Regulation. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133), 20180081.

  36. Floridi, L. (2019). Translating Principles into Practices of Digital Ethics: Five Risks of Being Unethical. Philosophy & Technology, s13347-019-00354–x.

  37. Floridi, L., & Cowls, J. (2019). A Unified Framework of Five Principles for AI in Society. Harvard Data Science Review.

  38. Fraser, A. G., Butchart, E. G., Szymański, P., Caiani, E. G., Crosby, S., Kearney, P., & Van de Werf, F. (2018). The need for transparency of clinical evidence for medical devices in Europe. The Lancet, 392(10146), 521–530. Scopus.

  39. Gerke, S., Minssen, T., & Cohen, G. (2020). Ethical and legal challenges of artificial intelligence-driven healthcare. In Artificial Intelligence in Healthcare (pp. 295–336). Elsevier.

  40. Gianfrancesco, M. A., Tamang, S., Yazdany, J., & Schmajuk, G. (2018). Potential biases in machine learning algorithms using electronic health record data. JAMA Internal Medicine, 178(11), 1544–1547.

  41. Giddens, A. (1984). The constitution of society: Outline of the theory of structuration. Polity Press.

  42. Gillan, C., Milne, E., Harnett, N., Purdie, T. G., Jaffray, D. A., & Hodges, B. (2019). Professional implications of introducing artificial intelligence in healthcare: An evaluation using radiation medicine as a testing ground. Journal of Radiotherapy in Practice, 18(1), 5–9.

  43. Goldacre, B. (2016). Make journals report clinical trials properly. Nature, 530(7588), 7–7.

  44. Greene, D., Hoffmann, A. L., & Stark, L. (2019). Better, Nicer, Clearer, Fairer: A Critical Assessment of the Movement for Ethical Artificial Intelligence and Machine Learning. Hawaii International Conference on System Sciences.

  45. Greenhalgh, T. (2018). How to improve success of technology projects in health and social care. Public Health Research & Practice, 28(3).

  46. Greenhalgh, T., & Papoutsi, C. (2018). Studying complexity in health services research: Desperately seeking an overdue paradigm shift. BMC Medicine, 16(1), 95.

  47. Greenhalgh, T., Robert, G., Macfarlane, F., Bate, P., & Kyriakidou, O. (2004). Diffusion of Innovations in Service Organizations: Systematic Review and Recommendations. The Milbank Quarterly, 82(4), 581–629.

  48. Greenhalgh, T., & Stones, R. (2010). Theorising big IT programmes in healthcare: Strong structuration theory meets actor-network theory. Social Science & Medicine, 70(9), 1285–1294.

  49. Greenhalgh, T., & Swinglehurst, D. (2011). Studying technology use as social practice: The untapped potential of ethnography. BMC Medicine, 9(1), 45.

  50. Greenhalgh, T., Wherton, J., Papoutsi, C., Lynch, J., Hughes, G., A’Court, C., Hinder, S., Fahy, N., Procter, R., & Shaw, S. (2017). Beyond Adoption: A New Framework for Theorizing and Evaluating Nonadoption, Abandonment, and Challenges to the Scale-Up, Spread, and Sustainability of Health and Care Technologies. Journal of Medical Internet Research, 19(11), e367.

  51. Greenhalgh, T., Wherton, J., Papoutsi, C., Lynch, J., Hughes, G., A’Court, C., Hinder, S., Procter, R., & Shaw, S. (2018). Analysing the role of complexity in explaining the fortunes of technology programmes: Empirical application of the NASSS framework. BMC Medicine, 16(1), 66.

  52. Hagendorff, T. (2020). The Ethics of AI Ethics: An Evaluation of Guidelines. Minds and Machines, 30(1), 99–120.

  53. Hague, D. C. (2019). Benefits, Pitfalls, and Potential Bias in Health Care AI. North Carolina Medical Journal, 80(4), 219–223.

  54. Handelman, G. S., Kok, H. K., Chandra, R. V., Razavi, A. H., Huang, S., Brooks, M., Lee, M. J., & Asadi, H. (2019). Peering Into the Black Box of Artificial Intelligence: Evaluation Metrics of Machine Learning Methods. American Journal of Roentgenology, 212(1), 38–43.

  55. Hardt, M., & Chin, M. H. (2020). It is Time for Bioethicists to Enter the Arena of Machine Learning Ethics. The American Journal of Bioethics, 20(11), 18–20.

  56. Harerimana, G., Jang, B., Kim, J. W., & Park, H. K. (2018). Health Big Data Analytics: A Technology Survey. IEEE Access, 6, 65661–65678.

  57. Haynes, P. (2008). Complexity Theory and Evaluation in Public Management: A qualitative systems approach. Public Management Review, 10(3), 401–419.

  58. He, J., Baxter, S. L., Xu, J., Xu, J., Zhou, X., & Zhang, K. (2019). The practical implementation of artificial intelligence technologies in medicine. Nature Medicine, 25(1), 30–36.

  59. Higgins, D., & Madai, V. I. (2020). From Bit to Bedside: A Practical Framework for Artificial Intelligence Product Development in Healthcare. Advanced Intelligent Systems, 2(10), 2000052.

  60. Hwang, T. J., Kesselheim, A. S., & Vokinger, K. N. (2019). Lifecycle Regulation of Artificial Intelligence– and Machine Learning–Based Software Devices in Medicine. JAMA.

  61. Jill Hopkins, J., Keane, P. A., & Balaskas, K. (2020). Delivering personalized medicine in retinal care: From artificial intelligence algorithms to clinical application. Current Opinion in Ophthalmology, 31(5), 329–336.

  62. Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399.

  63. Johnson, M. P., Zheng, K., & Padman, R. (2014). Modeling the longitudinality of user acceptance of technology with an evidence-adaptive clinical decision support system. Decision Support Systems, 57(1), 444–453.

  64. Kaminskas, R., & Darulis, Z. (2007). Peculiarities of medical sociology: Application of social theories in analyzing health and medicine. Medicina (Kaunas, Lithuania), 43(2), 110–117.

  65. Kerasidou, A. (2020). Artificial intelligence and the ongoing need for empathy, compassion and trust in healthcare. Bulletin of the World Health Organization, 98(4), 245–250.

  66. Klein, H. K., & Kleinman, D. L. (2002). The Social Construction of Technology: Structural Considerations. Science, Technology, & Human Values, 27(1), 28–52.

  67. La Fors, K., Custers, B., & Keymolen, E. (2019). Reassessing values for emerging big data technologies: Integrating design-based and application-based approaches. Ethics and Information Technology.

  68. Lee, C. I., Houssami, N., Elmore, J. G., & Buist, D. S. M. (2020). Pathways to breast cancer screening artificial intelligence algorithm validation. The Breast, 52, 146–149.

  69. Levy-Fix, G., Kuperman, G. J., & Elhadad, N. (2019). Machine Learning and Visualization in Clinical Decision Support: Current State and Future Directions. ArXiv:1906.02664 [Cs, Stat].

  70. Lewis, A. C. F. (2020). Where Bioethics Meets Machine Ethics. The American Journal of Bioethics, 20(11), 22–24.

  71. Li, R. C., Asch, S. M., & Shah, N. H. (2020). Developing a delivery science for artificial intelligence in healthcare. Npj Digital Medicine, 3(1), 107.

  72. Liu, X., Cruz Rivera, S., Moher, D., Calvert, M. J., Denniston, A. K., Ashrafian, H., Beam, A. L., Chan, A.-W., Collins, G. S., Deeks, A. D. J., ElZarrad, M. K., Espinoza, C., Esteva, A., Faes, L., Ferrante di Ruffano, L., Fletcher, J., Golub, R., Harvey, H., Haug, C., … Yau, C. (2020). Reporting guidelines for clinical trial reports for interventions involving artificial intelligence: The CONSORT-AI extension. The Lancet Digital Health, 2(10), e537–e548.

  73. Ljubicic, V., Ketikidis, P. H., & Lazuras, L. (2020). Drivers of intentions to use healthcare information systems among health and care professionals. Health Informatics Journal, 26(1), 56–71.

  74. Macfarlane, F., Barton-Sweeney, C., Woodard, F., & Greenhalgh, T. (2013). Achieving and sustaining profound institutional change in healthcare: Case study using neo-institutional theory. Social Science & Medicine, 80, 10–18.

  75. Magrabi, F., Ammenwerth, E., McNair, J. B., De Keizer, N. F., Hyppönen, H., Nykänen, P., Rigby, M., Scott, P. J., Vehko, T., Wong, Z. S.-Y., & Georgiou, A. (2019). Artificial Intelligence in Clinical Decision Support: Challenges for Evaluating AI and Practical Implications. Yearbook of Medical Informatics, 28(1), 128–134.

  76. Manchikanti, L., & Hirsch, J. A. (2015). A case for restraint of explosive growth of health information technology: First, do no harm. Pain Physician, 18(3), E293–E298.

  77. May, C. (2013). Towards a general theory of implementation. Implementation Science, 8(1), 18.

  78. McCarthy, A. D., & Lawford, P. V. (2015). Standalone medical device software: The evolving regulatory framework. Journal of Medical Engineering and Technology, 39(7), 441–447.

  79. McCradden, M. D., Stephenson, E. A., & Anderson, J. A. (2020). Clinical research underlies ethical integration of healthcare artificial intelligence. Nature Medicine, 26(9), 1325–1326.

  80. MHRA. (2021). Consultation on the future regulation of medical devices in the United Kingdom. Medicines and Healthcare products Regulatory Agency.

  81. Milano, S., Taddeo, M., & Floridi, L. (2020). Recommender systems and their ethical challenges. AI and Society.

  82. Miller, P. L. (1986). The evaluation of artificial intelligence systems in medicine. Computer Methods and Programs in Biomedicine, 22(1), 3–11.

  83. Miller, P. L., & Sittig, D. F. (1990). The evaluation of clinical decision support systems: What is necessary versus what is interesting. Medical Informatics, 15(3), 185–190.

  84. Mittelstadt, B. (2019). Principles alone cannot guarantee ethical AI. Nature Machine Intelligence, 1(11), 501–507.

  85. Molnár-Gábor, F. (2020). Artificial Intelligence in Healthcare: Doctors, Patients and Liabilities. In T. Wischmeyer & T. Rademacher (Eds.), Regulating Artificial Intelligence (pp. 337–360). Springer International Publishing.

  86. Morley, J., Elhalal, A., Garcia, F., Kinsey, L., Mökander, J., & Floridi, L. (2021). Ethics as a Service: A Pragmatic Operationalisation of AI Ethics. Minds and Machines, 31(2), 239–256.

  87. Morley, J., & Floridi, L. (2020). An ethically mindful approach to AI for health care. The Lancet, 395(10220), 254–255.

  88. Morley, J., Floridi, L., Kinsey, L., & Elhalal, A. (2019). From what to how: An initial review of publicly available AI ethics tools, methods and research to translate principles into practices. Science and Engineering Ethics, 26(4), 2141–2168.

  89. Morley, J., & Joshi, I. (2019). Developing effective policy to support Artificial Intelligence in Health and Care. Eurohealth, 25(2).

  90. Moses, L. (2016). Regulating in the Face of Sociotechnical Change (R. Brownsword, E. Scotford, & K. Yeung, Eds.; Vol. 1). Oxford University Press.

  91. Nagendran, M., Chen, Y., Lovejoy, C. A., Gordon, A. C., Komorowski, M., Harvey, H., Topol, E. J., Ioannidis, J. P. A., Collins, G. S., & Maruthappu, M. (2020). Artificial intelligence versus clinicians: Systematic review of design, reporting standards, and claims of deep learning studies. BMJ, m689.

  92. Ngiam, K. Y., & Khor, I. W. (2019). Big data and machine learning algorithms for health-care delivery. The Lancet Oncology, 20(5), e262–e273.

  93. NHSX. (2021). The National Strategy for AI in Health and Social Care. NHSX.

  94. Nsoesie, E. O. (2018). Evaluating Artificial Intelligence Applications in Clinical Settings. JAMA Network Open, 1(5), e182658.

  95. O’Doherty, K. C., Christofides, E., Yen, J., Bentzen, H. B., Burke, W., Hallowell, N., Koenig, B. A., & Willison, D. J. (2016). If you build it, they will come: Unintended future uses of organised health data collections. BMC Medical Ethics, 17(1).

  96. Pagallo, U., Casanovas, P., & Madelin, R. (2019). The middle-out approach: Assessing models of legal governance in data protection, artificial intelligence, and the Web of Data. The Theory and Practice of Legislation, 7(1), 1–25.

  97. Park, S. H., & Han, K. (2018). Methodologic Guide for Evaluating Clinical Performance and Effect of Artificial Intelligence Technology for Medical Diagnosis and Prediction. Radiology, 286(3), 800–809.

  98. Park, S. H., & Kressel, H. Y. (2018). Connecting Technological Innovation in Artificial Intelligence to Real-world Medical Practice through Rigorous Clinical Validation: What Peer-reviewed Medical Journals Could Do. Journal of Korean Medical Science, 33(22), e152.

  99. Perc, M., Ozer, M., & Hojnik, J. (2019). Social and juristic challenges of artificial intelligence. Palgrave Communications, 5(1).

  100. Pettigrew, A., McKee, L., & Ferlie, E. (1988). Understanding Change in the NHS. Public Administration, 66(3), 297–317.

  101. Pinch, T. J., & Bijker, W. E. (1984). The Social Construction of Facts and Artefacts: Or How the Sociology of Science and the Sociology of Technology might Benefit Each Other. Social Studies of Science, 14(3), 399–441.

  102. Poland, B., Lehoux, P., Holmes, D., & Andrews, G. (2005). How place matters: Unpacking technology and power in health and social care. Health & Social Care in the Community, 13(2), 170–180.

  103. Price, W. N., & Cohen, I. G. (2019). Privacy in the age of medical big data. Nature Medicine, 25(1), 37–43.

  104. Price, W. N., Gerke, S., & Cohen, I. G. (2019). Potential Liability for Physicians Using Artificial Intelligence. JAMA.

  105. Reddy, S., Allan, S., Coghlan, S., & Cooper, P. (2020). A governance model for the application of AI in health care. Journal of the American Medical Informatics Association, 27(3), 491–497.

  106. Reddy, S., Rogers, W., Makinen, V.-P., Coiera, E., Brown, P., Wenzel, M., Weicken, E., Ansari, S., Mathur, P., Casey, A., & Kelly, B. (2021). Evaluation framework to guide implementation of AI systems into healthcare settings. BMJ Health & Care Informatics, 28(1), e100444.

  107. Reisman, Y. (1996). Computer-based clinical decision aids. A review of methods and assessment of systems. Medical Informatics, 21(3), 179–197.

  108. Rességuier, A., & Rodrigues, R. (2020). AI ethics should not remain toothless! A call to bring back the teeth of ethics. Big Data & Society, 7(2), 205395172094254.

  109. Rhem, A. J. (2021). AI ethics and its impact on knowledge management. AI and Ethics, 1(1), 33–37.

  110. Rigby, M. J., Hulm, C., Detmer, D., & Buccoliero, L. (2007). Enabling the safe and effective implementation of health informatics systems-validating rolling out the ECDL/ICDL health supplement. Studies in Health Technology and Informatics, 129, 1347–1351.

  111. Rivers, J. A., & Rivers, P. A. (2000). The ABCs for deciding on a decision support system in the health care industry. Journal of Health and Human Services Administration, 22(3), 346–353.

  112. Rogers, E. M. (2003). Diffusion of innovations (5th ed). Free Press.

  113. Sandberg, J., & Alvesson, M. (2011). Ways of constructing research questions: Gap-spotting or problematization? Organization, 18(1), 23–44.

  114. Schiff, D., Biddle, J., Borenstein, J., & Laas, K. (2020). What’s next for AI ethics, policy, and governance? A global overview. 153–158.

  115. Schloemer, T., & Schröder-Bäck, P. (2018). Criteria for evaluating transferability of health interventions: A systematic review and thematic synthesis. Implementation Science, 13(1), 88.

  116. Schönberger, D. (2019). Artificial intelligence in healthcare: A critical analysis of the legal and ethical implications. International Journal of Law and Information Technology, 27(2), 171–203.

  117. Scott, T. (2003). Implementing culture change in health care: Theory and practice. International Journal for Quality in Health Care, 15(2), 111–118.

  118. Sendak, M., Elish, M. C., Gao, M., Futoma, J., Ratliff, W., Nichols, M., Bedoya, A., Balu, S., & O’Brien, C. (2020). ‘The Human Body is a Black Box’: Supporting Clinical Decision-Making with Deep Learning. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 99–109.

  119. Sendak, M. P., D’Arcy, J., Kashyap, S., Gao, M., Nichols, M., Corey, K., Ratliff, W., & Balu, S. (2020). A path for translation of machine learning products into healthcare delivery. EMJ Innov, 10, 19–00172.

  120. Seneviratne, M. G., Shah, N. H., & Chu, L. (2020). Bridging the implementation gap of machine learning in healthcare. BMJ Innovations, 6(2), 45–47.

  121. Sethi, N., & Laurie, G. T. (2013). Delivering proportionate governance in the era of eHealth: Making linkage and privacy work together. Medical Law International, 13(2–3), 168–204.

  122. Shah, P., Kendall, F., Khozin, S., Goosen, R., Hu, J., Laramie, J., Ringel, M., & Schork, N. (2019). Artificial intelligence and machine learning in clinical development: A translational perspective. Npj Digital Medicine, 2(1), 69.

  123. Shaw, J., Shaw, S., Wherton, J., Hughes, G., & Greenhalgh, T. (2017). Studying Scale-Up and Spread as Social Practice: Theoretical Introduction and Empirical Case Study. Journal of Medical Internet Research, 19(7), e244.

  124. Shaw, M. C., & Stahl, B. C. (2011). On quality and communication: The relevance of critical Theory to Health Informatics. Journal of the Association of Information Systems, 12(3), 255–273.

  125. Shen, J., Zhang, C. J. P., Jiang, B., Chen, J., Song, J., Liu, Z., He, Z., Wong, S. Y., Fang, P.-H., & Ming, W.-K. (2019). Artificial Intelligence Versus Clinicians in Disease Diagnosis: Systematic Review. JMIR Medical Informatics, 7(3), e10010.

  126. Shortliffe, E. H., & Sepúlveda, M. J. (2018). Clinical Decision Support in the Era of Artificial Intelligence. JAMA, 320(21), 2199.

  127. Sim, I., Gorman, P., Greenes, R. A., Haynes, R. B., Kaplan, B., Lehmann, H., & Tang, P. C. (2001). Clinical decision support systems for the practice of evidence-based medicine. Journal of the American Medical Informatics Association, 8(6), 527–534.

  128. Smith, A. E., Nugent, C. D., & McClean, S. I. (2003). Evaluation of inherent performance of intelligent medical decision support systems: Utilising neural networks as an example. Artificial Intelligence in Medicine, 27(1), 1–27.

  129. Sounderajah, V., Ashrafian, H., Aggarwal, R., De Fauw, J., Denniston, A. K., Greaves, F., Karthikesalingam, A., King, D., Liu, X., Markar, S. R., McInnes, M. D. F., Panch, T., Pearson-Stuttard, J., Ting, D. S. W., Golub, R. M., Moher, D., Bossuyt, P. M., & Darzi, A. (2020). Developing specific reporting guidelines for diagnostic accuracy studies assessing AI interventions: The STARD-AI Steering Group. Nature Medicine, 26(6), 807–808.

  130. Sounderajah, V., Ashrafian, H., Rose, S., Shah, N. H., Ghassemi, M., Golub, R., Kahn, C. E., Esteva, A., Karthikesalingam, A., Mateen, B., Webster, D., Milea, D., Ting, D., Treanor, D., Cushnan, D., King, D., McPherson, D., Glocker, B., Greaves, F., … Darzi, A. (2021). A quality assessment tool for artificial intelligence-centered diagnostic test accuracy studies: QUADAS-AI. Nature Medicine, 27(10), 1663–1665.

  131. Spiegelhalter, D. (2020). Should We Trust Algorithms? Harvard Data Science Review.

  132. Terzis, P. (2020). Onward for the freedom of others: Marching beyond the AI ethics. 220–229.

  133. The CONSORT-AI and SPIRIT-AI Steering Group. (2019). Reporting guidelines for clinical trials evaluating artificial intelligence interventions are needed. Nature Medicine.

  134. The DECIDE-AI Steering Group. (2021). DECIDE-AI: New reporting guidelines to bridge the development-to-implementation gap in clinical artificial intelligence. Nature Medicine.

  135. Tran, T., Paige, G., Kaleigh, J.-C., & Adriana, I. (2019). A Framework for Applied AI in Healthcare. Studies in Health Technology and Informatics, 1993–1994.

  136. Tsoukas, H. (2017). Don’t Simplify, Complexify: From Disjunctive to Conjunctive Theorizing in Organization and Management Studies. Journal of Management Studies, 54(2), 132–153.

  137. Van Calster, B., Wynants, L., Timmerman, D., Steyerberg, E. W., & Collins, G. S. (2019). Predictive analytics in health care: How can we know it works? Journal of the American Medical Informatics Association, 26(12), 1651–1654.

  138. Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User Acceptance of Information Technology: Toward a Unified View. MIS Quarterly, 27(3), 425.

  139. Venkatesh, V., & Davis, F. D. (2000). A Theoretical Extension of the Technology Acceptance Model: Four Longitudinal Field Studies. Management Science, 46(2), 186–204.

  140. Vourgidis, I., Mafuma, S. J., Wilson, P., Carter, J., & Cosma, G. (2019). Medical expert systems – A study of trust and acceptance by healthcare stakeholders. Advances in Intelligent Systems and Computing, 840, 108–119.

  141. Wainwright, D. W., & Waring, T. S. (2007). The Application and Adaptation of a Diffusion of Innovation Framework for Information Systems Research in NHS General Medical Practice. Journal of Information Technology, 22(1), 44–58.

  142. Ward, R. (2013). The application of technology acceptance and diffusion of innovation models in healthcare informatics. Health Policy and Technology, 2(4), 222–228.

  143. Whittlestone, J., Alexandrova, A., Nyrup, R., & Cave, S. (2019). The role and limits of principles in AI ethics: Towards a focus on tensions. 195–200.

  144. Wiens, J., Saria, S., Sendak, M., Ghassemi, M., Liu, V. X., Doshi-Velez, F., Jung, K., Heller, K., Kale, D., Saeed, M., Ossorio, P. N., Thadaney-Israni, S., & Goldenberg, A. (2019). Do no harm: A roadmap for responsible machine learning for health care. Nature Medicine.

  145. Winfield, A. F. T., & Jirotka, M. (2018). Ethical governance is essential to building trust in robotics and artificial intelligence systems. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133), 20180085.

  146. Xafis, V., & Labude, M. K. (2019). Openness in Big Data and Data Repositories: The Application of an Ethics Framework for Big Data in Health and Research. Asian Bioethics Review, 11(3), 255–273.

  147. Yusuf, M., Atal, I., Li, J., Smith, P., Ravaud, P., Fergie, M., Callaghan, M., & Selfe, J. (2020). Reporting quality of studies using machine learning models for medical diagnosis: A systematic review. BMJ Open, 10(3), e034568.
