Chapter 1: Introduction and survey
By Victoria Vaughan
14 December 2023




Bill Gates, co-founder of Microsoft, has said the advent of generative AI is as fundamental as the development of the PC, the internet and the mobile phone. The way these technologies now shape our lives could not have been conceived forty years ago, and the way AI will affect us in the future is similarly unknown.

The promise of AI is exciting, particularly the newer form, generative AI, which does more than pattern recognition: it creates new material from what it has learned. The best-known example is OpenAI's large language model ChatGPT, which had an unprecedented launch, with a million people signing up in the first five days.

But alarm bells are sounding, particularly around the creation of fake content and in the creative industries, where concerns about AI learning from creative work and replacing writers and artists make for a copyright minefield yet to be fully navigated.

In recognition of this, the Government convened an AI safety summit at Bletchley Park, Buckinghamshire, at the start of November. Government representatives from 27 countries attended and made the Bletchley Declaration, which states: 'we affirm that, for the good of all, AI should be designed, developed, deployed, and used, in a manner that is safe, in such a way as to be human-centric, trustworthy and responsible'. It cites particular concerns around cybersecurity, biotechnology and disinformation, and resolves to focus on identifying risks and, where appropriate, building international collaboration around policies on AI.

As part of this, the big frontier AI firms (Amazon, Anthropic, Google DeepMind, Inflection, Meta, Microsoft and OpenAI) have outlined their safety policies against nine areas of AI safety [see box].

When it comes to healthcare, the potential for AI to enhance every aspect of the NHS's work, from back-office functions and population health management to research and diagnosis, is huge.

There has been early success in imaging, where AI has improved cancer detection, and work on other AI-enhanced aspects of healthcare is well underway. But there are ethical questions around, say, dementia patients being cared for by robots, as well as risks around AI drawing conclusions from potentially biased, poor-quality or incorrect data, and around the security of that data.

Data expert Professor Ben Goldacre stated in his 2022 review that the NHS's '73 years of complete NHS patient records contain all the noise from millions of lifetimes. Perfect, subtle signals can be coaxed from this data, and those signals go far beyond mere academic curiosity. They represent deeply buried treasure, that can help prevent suffering and death, around the planet, on a biblical scale. It is our collective duty to make this work.'

He advised the establishment of secure data environments: designated places where patient data is held and access is granted according to job role. For example, where local data-sharing agreements are in place, GPs would be able to de-anonymise data for direct patient care [see Chapter 4].

As the only brand for primary care leaders in the system, building on years of catering to clinical commissioners, Healthcare Leader carried out a survey in October, answered by 105 primary care health leaders, to find out what they thought and felt about AI and its potential impact on their work. Overall, 53% of respondents indicated that they were positive or very positive about the use of AI in healthcare.

The majority of respondents (57%) said they looked forward to using AI, and 63% saw it as a useful tool in their job role. A significant majority (79%) do not believe they will be replaced by AI, although most (54%) admit that they are not using AI in their job at present.

Respondents felt that AI could help in all areas of their work, chiefly in data analysis (70%), summarising documents (62%) and scheduling meetings and appointments (58%).

When it comes to the wider impact of AI, health leaders felt it has the potential to improve all areas of care, particularly diagnostics (62%), development of new medicines (49%) and administration (47%).

And just over three-quarters of respondents (76%) felt that administration would be most improved through the use of AI, while around half said it would improve patient outcomes (55%) and patient care (50%).

While our respondents are mostly positive about the use of AI in healthcare and in their job roles, and feel it will benefit all aspects of their work including patient care and outcomes, there remains the thorny issue that AI needs to learn from and work with data. Our survey shows high levels of concern around the sharing of patient data, with 49% of respondents feeling concerned or very concerned.

Attitudes to sharing data with NHS organisations were positive, supported by 88% of respondents, and 70% were in favour of sharing data with patients, but beyond that respondents were not positive. Just 27% said yes to sharing data with universities, versus 46% who said no. When it comes to private companies, 80% said no, and 74% said no to pharmaceutical companies; on sharing GP data, a majority (54%) said it should not be shared.

Of course, some data is already shared, as respondents highlighted: with patients (61%) and with other NHS organisations (63%). Some 81.1% of patients now have access to their records, and Greater Manchester [see Chapter 4] has recently been permitted to link primary and secondary care data together. It seems this will go further in future via secure data environments, as one of the ways the NHS is filling the data analyst recruitment gap is by working with university academics, who are attracted by the opportunity to work with live and important data, albeit anonymised, for research purposes.

While attitudes towards the impact and potential of AI are positive, they are counterbalanced by concerns around sharing data and, essentially, trust. AI is seen as a tool to enhance the work of humans rather than a threat to job roles, but it is only as good as the data it has to work with. There must be trust around the safety and security of sharing data and the way in which AI is used to interpret that data.

This report looks in depth at the potential and pitfalls of AI, the abundance of products, the views of key industry leaders, and issues around data use and governance. It draws together the latest thinking to help primary care leaders navigate future developments in AI.

Nine areas of AI Safety

  • Responsible Capability Scaling provides a framework for managing risk as organisations scale the capability of frontier AI systems, enabling companies to prepare for potential future, more dangerous AI risks before they occur
  • Model Evaluations and Red Teaming can help assess the risks AI models pose and inform better decisions about training, securing, and deploying them
  • Model Reporting and Information Sharing increases government visibility into frontier AI development and deployment and enables users to make well-informed choices about whether and how to use AI systems
  • Security Controls Including Securing Model Weights are key underpinnings for the safety of an AI system
  • Reporting Structure for Vulnerabilities enables outsiders to identify safety and security issues in an AI system
  • Identifiers of AI-generated Material provide additional information about whether content has been AI generated or modified, helping to prevent the creation and distribution of deceptive AI-generated content
  • Prioritising Research on Risks Posed by AI will help identify and address the emerging risks posed by frontier AI
  • Preventing and Monitoring Model Misuse is important as, once deployed, AI systems can be intentionally misused for harmful outcomes
  • Data Input Controls and Audits can help identify and remove training data likely to increase the dangerous capabilities their frontier AI systems possess, and the risks they pose

Source: AI Safety Summit 
