
Doctors using AI and seeing benefits, report finds

By Anna Colivicchi
6 February 2025



Nearly three in ten doctors have used artificial intelligence (AI) in their work over the past year and see its benefits for efficiency and patient care, new research commissioned by the GMC has found.

Researchers commissioned by the regulator looked into the types of AI doctors are using, how well they understand its risks and what they do if they disagree with the output of an AI system.

A survey of 1,000 doctors – including 173 GPs – found that AI is ‘embedded’ in the working life of a sizeable portion of doctors, with 29% reporting that they had made use of at least one AI system in their work in the previous year.

Of those who reported any AI use, more than half (56%) said they used it at least once a week, and 73% reported using it at least once a month. However, 15% felt the technology was making them ‘worried about their job security’.

The Alan Turing Institute, which analysed the results of the survey, pointed out that the majority of doctors are not making any use of AI systems in their work, meaning that there are ‘significant areas’ where the ‘potential’ of the technology ‘is not being explored’.

A total of 17 doctors, including seven GPs, who said they had used AI in the past 12 months, discussed the benefits and risks of using the technology with researchers from the agency Community Research.

They found that there was ‘significant interest’ in developing AI systems to help manage back-office functions and free up resources for patient care, but that doctors were also aware of the risks of over-reliance on the tools.

One GP said: ‘There’s an element of risk, so you’re still in a risky profession, you still know that you could make a decision that alters the life or death situation for a patient.

‘You might look at the advice and think: “No, I don’t agree with that,” but that’s fine; at least it’s asked the question and it’s then part of the checklist in your own thought processes.’

GPs told the researchers that AI tools and algorithms are currently used in general practice and integrated into triage and patient management tools to:

  • Help prioritise which patients to see
  • Suggest diagnostic tests that should be organised
  • Optimise care pathways (e.g. reduce the number of appointments needed)
  • Flag risks and potential diagnoses
  • Highlight possible prescribing issues or conflicts

Those using these systems in primary care said that, while the system may be available to all doctors within a GP practice, it remains ‘up to the individual practitioner’ how far they pay attention to AI outputs.

One GP told the researchers: ‘It’s really dependent on the clinician if they choose to use it or not and I guess that’s the bit where every individual has their own variances or thresholds, as to how willing they are to use some of the information that is there.’

The doctors interviewed also said that the technologies ‘presented risks’, the researchers added, as they saw potential for AI-generated answers to be based on data that could itself ‘be false or biased’. They also acknowledged ‘possible confidentiality risks’ in sharing patient data.

One GP trainee said: ‘I did wonder what the data I was feeding it was going to be used for, long-term, because these LLMs [Large Language Models] build themselves on data that they’ve been given in the past.

‘So before I started using it on a regular basis, I had a chat with my supervisors and trainers in the surgery, to have a discussion about privacy risks, which is why I essentially do not feed in patient information, as in name, date of birth, address, details like that, because I believe that’s where the danger comes from.’

The researchers said that this study showed that doctors believe AI is an ‘assistive tool’ to their practice, and that they recognise that they have responsibility for all decisions informed by AI.

The study said: ‘They appear confident in overriding decisions made by AI and some doctors are overriding AI recommendations frequently.

‘This very much supports findings from the Turing survey where only 1% of doctors said they would follow the AI judgment when asked what they would do if they disagreed with the recommendation of an AI system.’

GMC director of strategy and policy Shaun Gallagher said: ‘It’s clear that AI’s use in healthcare will continue to grow and projects like these give valuable insights into how doctors are using these systems day-to-day.

‘These views are helpful for us as a regulator, but also for wider healthcare organisations, in anticipating how we can best support the safe and efficient adoption of these technologies now, and into the future.’

Last year, a survey found that one fifth of GPs were already using artificial intelligence in clinical practice, with ChatGPT the most popular tool.

AI is also being used to help diagnose cancer earlier, with a new trial to detect breast cancer launched this week.

The Government has also set out a new AI action plan this year, aiming to help the public sector spend less time on admin and more on delivering services. Health secretary Wes Streeting has previously said that shifting from analogue to digital is the priority within this parliament.

In November, research from Google suggested that greater use of AI could provide an extra 3.7 million GP appointments each week within 10 years.

A version of this story was first published on our sister title Pulse.
