Artificial intelligence (AI) is often touted as the ‘silver bullet’ that could save healthcare, freeing up doctors’ time through faster diagnosis, risk prediction, genomic analysis and accelerated drug development.
A myriad of AI technologies are already in use in the NHS, with a long list being trialled (and trained) by clinicians. A recent example is the NHS England-funded pilots of an AI waiting list tool that helps identify the most at-risk patients and prepare them for surgery. There are now plans to roll the tool out across Cheshire and Merseyside after a contract worth several million pounds was signed.
In August, the National Institute for Health and Care Excellence (NICE) recommended nine AI technologies for adoption by NHS hospitals to save time in planning radiotherapy. In Sheffield, clinicians have developed technology to speed up assessment of the heart’s function, spotting damage within seconds and removing the manual processes previously required to analyse MRI scans.
AI in imaging
Medical image analysis is certainly one area where there has been a lot of work and scrutiny, says Christopher Yau, Professor of Artificial Intelligence at the Big Data Institute in Oxford. In the US, 523 medical devices making use of AI have been approved, and 321 of those are in radiology.
‘It’s become a serious debate because it’s been subject to some more rigorous larger studies and getting to the point where you might think it could seriously challenge experienced radiologists with accuracy,’ he explains.
There has been much more serious debate about its use in frontline care in that field, he notes. In other areas, such as molecular medicine and genomics, it is routine to use algorithmic techniques and no one bats an eye. AI has also been fairly well accepted for data analysis in the cancer world, he adds.
Algorithms developed by Babylon Health and Ada have been used for some time in helping to triage patients. Attendees at this year’s European Emergency Medicine Congress heard that ChatGPT performed as well as a trained doctor in suggesting likely diagnoses for patients being assessed in A&E departments.
‘They are useful, but part of that performance is not necessarily down to the intelligence of the technology, just how repetitive and monotonous human beings are – statistically it’s quite easy to predict for the majority. It’s in the minority of complex cases where triage is difficult, whether it’s human or non-human,’ he adds.
The use of AI in healthcare is certainly something the government and the NHS are keen to develop and adopt, having already invested billions over the past decade. In a White Paper published in August, ministers set out their vision for a ‘pro-innovation’ approach to AI regulation, with the goal of the UK becoming an ‘AI superpower’.
Around the same time, UK Research and Innovation announced £13 million of funding for 22 AI health research projects across imaging, surgery, tumour classification, chronic disease prediction, genomic profiling and personalised medicine.
Professor Yau is one of the beneficiaries, working on a project to create the foundation underpinning clinical risk prediction models for multiple long-term conditions – the chassis the car is built on, as he puts it – which could make it simpler for tools for different diseases to be created and gain regulatory approval.
Despite the huge amount of interest, time and investment in AI in healthcare, we need to understand that we are still in the infancy of its use, he explains. ‘When we look back 20 years from now, the systems we’re working with will look very different. We’re still at the stage where we are innovating around specific tools, but for AI to be really transformative, we’re going to need to rethink the whole way we develop clinical care and integrate information and data all through the clinical pathway.’
Alongside the potential of AI is a host of ethical, safety and societal concerns that we need to grapple with. A UK Frontier AI Taskforce has been set up, backed with £100 million of funding, to advise the government on risks and opportunities. Members include former Academy of Medical Royal Colleges Chair Dame Helen Stokes-Lampard. In November, the UK hosted the first AI Safety Summit at Bletchley Park to build international consensus on the safe use of AI.
The NHS has the AI and Digital Regulations Service – a cross-regulatory advisory body which sets out guidance for those developing and adopting these new technologies. Zoher Kapacee, head of data and AI policy at the Health Research Authority, says current regulations do apply to AI, but there are nuances as well as hurdles that need to be addressed.
There is significant interest from developers and many potential applications of AI, but also limitations on the adoption of the technology. ‘Challenges include access to high-quality data, research regulation, clinical trust and clinical appetite. I speak to physicians who say, “I would love to use this, but we just don’t have time.”’
But Kapacee does predict an exponential rise, particularly in the easy-to-use technologies, as clinicians see the benefits. ‘The biggest issue is often questions around what is the right AI intervention to deploy that doesn’t suffer from obsolescence or some new technology coming along.’
‘But if we can find an acceptable safety threshold, at least for patient care, then I can see an explosion of the use of AI in healthcare.’
Experts are already warning about generative AI churning out misinformation, with programmes like ChatGPT prone to making things up. This is one reason why you will always need an AI bot to assist rather than replace a healthcare professional – what is known as the ‘human in the loop’ scenario – says Jim Kean, CEO of Molecular You. He notes that the concept of ‘poisoned AI’ underpins this issue, where use of unreliable data – perhaps also generated by bots – leads to inaccurate predictions or responses.
He adds that people will also always want a connection with their healthcare professional; they don’t necessarily want to talk to a chatbot about their problems. ‘The trick is how you can free up highly trained professionals so they can deploy their education and human empathy, and be freed from regulatory work, compliance and keeping up with the latest research, so AI just becomes part of their helper network.’
Another issue that is a ‘really neglected part of the conversation’ currently is knowing the source of information provided by an AI tool, he adds. ‘Is this 100% correct or 80%? Because in healthcare, 80% could be fatal,’ he says.
The rate of development of AI is currently outpacing our ability to understand and use the technology safely in the real world, says Liz Ashall-Payne, chief executive of ORCHA, an organisation that reviews digital health apps.
‘We have to think safety first. One example is mole checking. We know that we have hundreds of thousands of people on a waiting list but 50% of those will not have cancer, we just don’t know which 50%. There are around 47 mole checking apps on the open market but of those, three work, so that’s the problem.’
She adds that AI is also a term that is used very loosely. ‘There are different risks within AI, and something that changes every time somebody uses it is higher risk than a chatbot that just has the same responses in an algorithm. ChatGPT is great from an admin point of view, but don’t use it in healthcare because it generates things that are made up.’
We have to get better at recognising what it is and recognising where the healthcare system is today and what healthcare systems and clinicians are willing to embed and use, she explains. ‘I would also say that AI could be part of the answer but it will never be the whole answer.’
A report from Asda Online Doctor surveyed 2,000 British adults to understand how they already interact with AI in healthcare and which conditions they would be reluctant to discuss with a chatbot. It found that 22% are already using the technology in their everyday lives. In addition, one in three people have taken Google’s advice on medical issues, and 12% of Gen Z (ages 18-26) and 11% of Millennials (ages 27-42) have taken medical advice from AI platforms, it reported.
Of course, AI is only as good as the information it is based on. The reason there has been so much AI development in radiology and imaging is because that’s where the best, most consistent, high-quality data exists, explains Pritesh Mistry, fellow in digital technologies at The King’s Fund. But those algorithms are still quite narrow in focus.
‘One of the potentials of AI is that it continues to adapt and get better, but that raises a whole other question about how you monitor it. And there are still lots of issues around the adoption of even the simplest AI tools and how much you trust what the system is doing,’ he says.
With quite a bit of focus currently on time-saving AI tools to take pressure off an overburdened workforce, there will be continued pressure to build this automation – and that’s when you may lose sight of what the technology is doing, he explains.
AI will not magically solve all the problems in the NHS, he notes. ‘You will still need capacity in the system, the skills, the experience, and then you will need to bring people on board as well. You have to consider the environment it sits in.’
And staff may not be completely on board if they’re reading constant press releases about how AI will revolutionise healthcare while working at an old computer that requires multiple log-ins and takes half an hour to turn on, he points out. ‘You also have some hospitals planning robot-assisted surgery and patients using wearable devices, while others are still trying to improve electronic health records.
‘You start to see this gap and we need to think about how we help those to get quicker and better with the technology rather than that gap widening because then it’s not available for everyone.’
In a recent blog, he argues that one overlooked issue is how AI has the potential not just to change how clinicians work but to help people take ownership of, and have more control over, their own health and social care. ‘In doing that you shift what is necessary for the system to do and you can maybe rebalance things a bit better.’
Researchers at the Ada Lovelace Institute have been carrying out various pieces of work looking at how the adoption of AI could affect health inequalities. Senior researcher Mavis Machirori says it will be crucial to make the right decisions about the data used to underpin AI technologies, to avoid making health inequalities worse.
‘We need to think about where the data is coming from and who has been involved in its collection.’ With increased use of technology within the NHS, decisions are being made about bringing together systems that were originally designed to do different things, which can also be problematic, she adds.
Fellow senior researcher Anna Studman says health inequalities are often not part of the conversation when thinking about AI. ‘We need to understand whose responsibility it is to look out for health inequalities in this digitalisation drive, and at the moment it feels like no one really knows.’
Alongside that, we don’t have good metrics measuring the impact of digital health, so we don’t know if it’s going to exacerbate inequalities, she adds. ‘People do say that they feel digitally excluded and they’re really worried about the loss of the human touch in healthcare. For some, being able to communicate through an app could aid accessibility, but we need to think about different groups,’ says Studman.
‘For those starting to embed these technologies, we also need to think carefully about the metrics – about what success looks like,’ adds Machirori. She wants to see real consultation and engagement with communities as the technology is rolled out, starting with identifying the problem you’re trying to solve. ‘You need local insight and something that’s more human-centred, rather than “we have this shiny new kit and we have already deployed it”.’