Ofsted-style ratings are heading to CCGs. But what will the Department of Health and NHS England need to look at as it is being developed?
At the end of October last year, Jeremy Hunt, the secretary of state for health, announced the government’s plans for Ofsted-style ratings to be introduced for clinical commissioning groups (CCGs), to help fill what he described as ‘the transparency gap’ in the National Health Service. He said the ratings would help the public understand the quality of their local health services and how this compares with other places. NHS England is now responsible for working out how this should be done, with the new measures to be in place by April this year and the first ratings to be produced in the summer.
But are more ratings really what the NHS needs? Here are five important questions about the plans, drawing on the findings of our recent review for the Department of Health on measuring the performance of local health systems – questions whose answers will have a big impact on the NHS in future.
Question 1
Will the focus of the ratings be CCGs or local health systems?
This might sound like quibbling over semantics, but it’s worth being clear that CCGs and local health systems aren’t the same thing. CCGs are NHS organisations that commission some (but not all) health services in their local area. They are just one part of a broader health system that includes all organisations commissioning and providing health and care services for the local population – whether that’s in Wigan, Wandsworth or another part of England.
This distinction has important implications for how performance is assessed. Assessing CCGs would mean using indicators that can be attributed to their performance as organisations, while assessing local health systems requires those doing the measuring to use a much wider lens – for example, by looking at how NHS services work with social care and public health services and assessing their collective impact for the population served.
Our recent review for the Department of Health – which was commissioned by the government to help inform their plans – focused on the performance of local health systems rather than CCGs. Taking this broader approach has the potential to encourage commissioners and providers to work together to improve care for the population they collectively serve. And is the public really interested in the workings of CCGs, per se? Or are they more interested in how the whole system – including CCGs – works together?
Question 2
What is the aim of providing CCG ratings?
Measuring quality in health services is not a simple task. Part of the challenge involves choosing what to measure and how to report the results, and doing both requires clarity about why performance is being measured in the first place.
The government’s plans point towards a number of different aims for CCG ratings: providing information to the public on the performance of local services, and offering an Ofsted-style judgement on the performance of CCGs (on a scale from ‘inadequate’ to ‘outstanding’, like the Care Quality Commission’s (CQC) provider ratings), presumably for the purposes of performance management. Hunt has also talked about transparent reporting of data (‘intelligent transparency’) as a way to support improvements in care in the NHS.
The problem is that these aims require different approaches. Compare judgement and improvement, for instance. Judgements require an assessment based on indicators that are unambiguous and attributable markers of performance. By contrast, indicators used to support staff to make improvements in care can be less robust and leave far more room for interpretation. The right approach to developing CCG ratings – the indicators chosen, how they are analysed, the format used to present the results, and so on – will differ depending on the aim.
Question 3
How will the aggregate ratings be constructed?
Aggregate ratings – that is, summary ratings combining a range of different measures – are not new in the NHS: they were first introduced for providers and commissioners back in 2000/01. Evidence of their impact since then is mixed, at best.
While aggregate ratings can sometimes help improve quality in the areas they cover, they can also have perverse effects for NHS patients and staff. In order to meet targets covered by ratings in the past, organisations have manipulated data, taken actions that are not in the best interest of patients, and paid less attention to areas not covered by the ratings. NHS ratings have also distorted local priorities and damaged organisational culture, staff morale and recruitment.
These, alongside a range of other conceptual, technical and behavioural issues, led us to advise against the use of aggregate scores based on performance metrics in our review. Put simply, they risk hiding far more than they reveal. Other ways of providing a summary of performance are available, learning lessons from approaches that have been taken in other countries.
We said that if the Department of Health wants to produce aggregate scores for CCG performance, it should at least draw on a wider range of ‘softer’ intelligence relating to leadership, culture and other factors to provide a more rounded assessment (as NHS England’s existing CCG Assurance Framework attempts to do). The suggestion of ‘expert committees’ interpreting performance data may not be quite the same thing.
Question 4
How does this fit with everything else that is already being used or developed to assess the performance of local health services?
The way that performance is reported and assessed in the NHS in England is already complicated and confusing, involving a number of different organisations and performance frameworks. The danger is that a new way of measuring performance is being layered on top of what we already have, rather than complementing or consolidating what exists.
How will the new approach fit with the existing outcomes frameworks and related indicator sets? How will it fit with the CQC’s plans for place-based provider ratings (where the performance of services in a given area is assessed)? How is a CCG supposed to know what its real priorities are? And will these be consistent with what other parts of the system are being measured on? Above all, there needs to be radical simplification and alignment of the way that performance is assessed in the NHS in England – not more confusion.
Question 5
Do we really know what kind of information the public wants about local health services? (And will we ask?)
If one of the main aims of providing CCG ratings is to provide information for the public, then asking people about the kind of information they’d like about their local health services will be essential. This is particularly true in the case of a CCG or local health system – things that people have no real choice over (unless they want to move house).
This means asking the public about the areas of care and the kinds of indicators that matter to them. The government has confirmed that ratings in six clinical areas – cancer, dementia, diabetes, mental health, maternity and learning disabilities – will be produced alongside overall CCG ratings under the new system. The challenge in doing this is that not all parts of the population are covered by the clinical areas chosen, and that some clinical areas are prioritised over others. Making these kinds of choices would be helped by seeing what the public think.
All in the approach
These are just some of the questions that will need to be considered by the Department of Health and NHS England as the plans for CCG ratings are developed over the coming months. They are also questions that CCGs themselves are likely to be asking as they hear about the plans. At The King’s Fund we support Hunt’s aim of ‘intelligent transparency’ in the NHS to support improvements in care. However, it will be important to ensure that the approach taken to achieving this is just as intelligent as the language used to describe it.
Hugh Alderwick, senior policy assistant to the chief executive officer at The King’s Fund.