The greatest threat to AI uptake across the healthcare system is the ‘off’ switch, with clinicians simply not using the technology if they do not see its benefit, a new white paper has suggested.
The paper said that if clinicians see the technology as burdensome or unfit for purpose, or are wary of how it will impact on their decision-making, their patients or their licences, they will not use it.
The white paper, from the research initiative the MPS Foundation, which is part of the Medical Protection Society, looked at the impact of AI on clinicians, building on the results of its Shared Care AI Role Evaluation research project.
It also highlighted a concern that clinicians would absorb legal responsibility for AI-influenced decisions, even when the systems themselves could be flawed.
The report said: ‘We are at a turning point in the deployment of medical AI, as it moves from the margins to the mainstream of healthcare delivery. To unlock and realise AI’s potential for patients, it is essential that we implement and deploy AI technologies in ways that work for those using them – the clinicians.
‘If we fail to do so, we risk exacerbating the very challenges AI is supposed to address: clinician burnout, inefficiency, and uneven patient experiences and outcomes.’
It outlined several recommendations (see box) for the government and regulators, including that AI tools should provide clinicians with information and not recommendations, and that clinicians need to have training to make them comfortable accepting responsibility for an AI tool’s use.
It added: ‘This white paper proposes recommendations which address the impact of decision-support tools on clinicians. The greatest threat to AI uptake in healthcare is the “off” switch, if frontline clinicians refuse to engage with technology they see as burdensome or unfit for purpose.
‘Given competing priorities for funding, political pressures and the need for good governance, it is more vital than ever to focus resources on AI solutions which will generate the most benefit. It is by understanding how AI can genuinely support clinicians that this benefit will most likely be achieved.’
The full seven recommendations from the white paper:
- AI tools should provide clinicians with information, not recommendations
Under the current product liability regime, the legal weight of an AI recommendation is unclear. By providing information, rather than recommendations, we reduce any potential risk to both clinicians and patients.
- Revise product liability for AI tools before allowing them to make recommendations
There are significant difficulties in applying the current product liability regime to an AI tool. Without reforms there is a risk that clinicians will act as a ‘liability sink’, absorbing all of the liability even where the system is a major cause of the wrong.
- AI companies should provide clinicians with the training and information required to make them comfortable accepting responsibility for an AI tool’s use
Clinicians need to understand the intended purpose of an AI tool, the contexts it was designed and validated to perform in, and the scope and limitations of its training dataset, including potential bias, in order to deliver the best possible care to patients.
- AI tools should not be considered akin to senior colleagues in clinician-machine teams
It should be made explicit in new healthcare AI policy guidance and in guidance from healthcare organisations how clinicians should approach conflicts of opinion with the AI. Clinicians should not always be expected to agree with, or defer to, an AI recommendation in the same way they would for a senior colleague.
- Disclosure should be a matter of well-informed discretion
As the clinician is responsible for patient care, and disagreement with an AI tool could end up worrying the patient, it should be at the clinician’s discretion, depending on context, whether to disclose to the patient that their decision has been informed by an AI tool.
- AI tools that work for users need to be designed with users
In the safety-critical and fast-moving healthcare sector, engaging clinicians in the design of all aspects of an AI tool – from the interface, to the balance of information provided, to the details of its implementation – can help to ensure that these technologies deliver more benefits than burdens.
- AI tools need to provide an appropriate balance of information to clinician users
Involving clinicians in the design and development of AI decision-support tools can help find the ‘Goldilocks’ zone of the right level of information being supplied by the AI tool.
Professor Gozie Offiah, chair of the MPS Foundation, said: ‘Healthcare is undergoing rapid change, driven by advances in technology that could fundamentally impact on healthcare delivery. The potential opportunities provided by AI are only limited by one’s imagination.
‘There are however real challenges and risks that must be addressed. Chief among those is the need for clinicians to remain as informed users of AI, rather than servants of the technology.
‘If AI works well for clinicians, they are more likely to embrace and interact with it, and this will be vital in unlocking benefits to patients. We have written to the regulators and the government minister to urge them to take on board these important recommendations.’
The white paper was a collaboration between the MPS Foundation, the Centre for Assuring Autonomy at the University of York, and the Improvement Academy hosted at the Bradford Institute for Health Research.
It comes after the Government set out a new AI action plan in January to help the public sector spend less time on admin and more on delivering services.
Health secretary Wes Streeting has previously said that shifting from analogue to digital is the priority within this parliament.
In November, research from Google suggested that greater use of AI could provide an extra 3.7 million GP appointments each week within 10 years.