
Action ‘urgently needed’ to prevent potential AI harm in healthcare
By Beth Gault
12 March 2024

Urgent action is needed to prevent potential harm from discrimination in AI used in healthcare, an independent inquiry has recommended.

The Equity in Medical Devices review, published yesterday (11 March), looked at ethnic and other biases within medical devices, including those assisted by artificial intelligence.

It found that while AI has benefits, it also has the potential to cause harm through inherent bias against certain population groups, including women, ethnic minorities and socio-economically disadvantaged people. The inquiry said action was needed at the ‘highest levels’ to anticipate and prevent this harm.

It detailed seven recommendations (see box at end) around developing bias-free AI devices, including establishing a government-appointed taskforce to assess the health equity impact of large language models such as ChatGPT.

The review also called for greater transparency around the data used by device manufacturers, to ensure they are clear about the limitations of the data and how to mitigate or avoid performance biases, and for regulators to have the resources to develop ‘agile and evolving’ guidance.

In response, the government accepted the review’s conclusions and committed to work with partners to improve the transparency of data used in the development of medical devices using AI.

Chair of the review, Professor Dame Margaret Whitehead, said: ‘Nowhere is the need to ensure AI safety and equity more pressing than in medical care, where built-in biases in applications have the potential to harm already disadvantaged patients.

‘Now is the time to seize the opportunity to incorporate action on equity in medical devices into the overarching global strategies on AI safety.’

The report authors added: ‘Few outside the health system may appreciate the extent to which AI has become incorporated into every aspect of healthcare – from prevention and screening to diagnostics and clinical decision-making, such as when to increase intensity of care.

‘Our review reveals how existing biases and discrimination in society can unwittingly be incorporated at every stage of the lifecycle of the devices, and then magnified in algorithm development and machine learning.’

The review also looked at biases within optical medical devices, such as pulse oximeters, and polygenic risk scores, which provide a measure of a person’s disease risk based on their genes.

AI-related recommendations:

The report made 18 recommendations in total. Numbers 8 to 15 covered preventing bias in AI-assisted medical devices.

Recommendation 8: AI-enabled device developers, and stakeholders including the NHS organisations that deploy the devices, should engage with diverse groups of patients, patient organisations and the public, and ensure they are supported to contribute to a co-design process for AI-enabled devices that takes account of the goals of equity, fairness and transparency throughout the product’s lifecycle. Engagement frameworks from organisations such as NHS England can help hold developers and healthcare teams to account for ensuring that existing health inequities affecting racial, ethnic and socio-economic subgroups are mitigated in the care pathways in which the devices are used.

Recommendation 9: The government should commission an online and offline academy to improve the understanding among all stakeholders of equity in AI-assisted medical devices. This academy could be established through the appropriate NHS agencies and should develop material for lay and professional stakeholders to promote better ways for developers and users of AI devices to address equity issues.

Recommendation 10: Researchers, developers and those deploying AI devices should ensure they are transparent about the diversity, completeness and accuracy of data through all stages of research and development. This includes the sociodemographic, racial and ethnic characteristics of the people participating in development, validation and monitoring of product performance.

Recommendation 11: Stakeholders across the device lifecycle should work together to ensure that best practice guidance, assurance and governance processes are co-ordinated and followed in support of a clear focus on reducing bias, with end-to-end accountability.

Recommendation 12: UK regulatory bodies should be provided with the long-term resources to develop agile and evolving guidance, including governance and assurance mechanisms, to assist innovators, businesses and data scientists to collaboratively integrate processes in the medical device lifecycle that reduce unfair biases and improve their detection, without being cumbersome or blocking progress.

Recommendation 13: The NHS should lead by example, drawing on its equity principles, influence and purchasing power, to shape the deployment of equitable AI-enabled medical devices in the health service.

Recommendation 14: Research commissioners should prioritise diversity and inclusion. The pursuit of equity should be a key driver of investment decisions and project prioritisation. This should incorporate the access of underrepresented groups to research funding and support, and inclusion of underrepresented groups in all stages of research development and appraisal.

Recommendation 15: Regulators should be properly resourced by the government to prepare and plan for the disruption that foundation models and generative AI will bring to medical devices, and the potential impact on equity. A government-appointed expert panel should be convened, made up of clinical, technology and healthcare leaders, patient and public involvement (PPI) representatives, industry, third sector, scientists and researchers who collectively understand the technical details of emerging AI and the context of medical devices, with the aim of assessing and monitoring the potential impact on AI quality and equity of large language and foundation models.

More details about what each recommendation includes can be found in the full report.
