
QOF: Comparing performance
8 November 2011



The data required for payment in QOF is extracted directly from practices each year on the first of April. It is combined with data entered onto the payment calculation systems – QMAS in Scotland and England – and passed to local health authorities for verification and authorisation. Finally, national authorities in each of the four countries process the data and publish it. Scotland, Wales and Northern Ireland published at the end of September, while England published at the end of October. Normally, there is little change from the data taken from the practices at the start of April.
 
While none of this is the final word on quality in a practice, it is helpful to be able to compare performance with that of other practices. Identifying strengths and weaknesses can be the first step towards making plans for improvement.
 
The data can be split into three areas: prevalence, clinical achievement and organisational achievement.
 
Prevalence has added importance this year as the five percent rule has been removed and, for the first time, prevalence is now directly linked to payment in all indicators. Effectively this has turned the indicators into a modified form of item-of-service payment. Areas that solely have a register, such as learning disabilities and obesity, now attract a payment for each patient added to the register.
 
In previous years prevalence had little effect on payment in several indicators – especially those dealing with mental health. For many practices this change will have had a large effect on payment at the end of the year. Practices with below-average prevalence could see quite a large reduction in income. For all practices the incentive to identify patients to add to registers is now about the same.
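As a rough sketch of how this now works: the structure below (points multiplied by a point value, scaled by relative list size and relative prevalence) reflects the general shape of the QOF payment calculation, but every number is an illustrative assumption rather than an actual figure from the Statement of Financial Entitlements.

def indicator_payment(points_achieved, point_value,
                      practice_list, average_list,
                      practice_prevalence, national_prevalence):
    # Payment for one indicator, scaled by relative list size and
    # relative prevalence. Illustrative sketch only.
    list_factor = practice_list / average_list
    prevalence_factor = practice_prevalence / national_prevalence
    return points_achieved * point_value * list_factor * prevalence_factor

point_value = 130.0   # assumed value of a QOF point in pounds
average_list = 6000   # assumed national average list size

# Two practices of identical size and achievement, differing only in prevalence
below_average = indicator_payment(9, point_value, 6000, average_list, 0.03, 0.04)
average = indicator_payment(9, point_value, 6000, average_list, 0.04, 0.04)
print(f"Below-average prevalence: £{below_average:,.2f}")   # £877.50
print(f"Average prevalence:       £{average:,.2f}")         # £1,170.00

A practice whose register is only three quarters of the expected size receives only three quarters of the payment – the "item of service" flavour described above.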
 
It might be expected that this would have an impact on the prevalence figures. Practices with low prevalence will be keen to make up their loss by finding more patients, and the rewards for high-prevalence practices are now higher too. However, in England at least, we are not seeing any real change.
 
In some disease areas an increase in prevalence each year is normal and to be expected. The cancer register, for instance, is based on all new diagnoses since April 2004. As new patients are diagnosed each year, prevalence rises in a roughly linear way. There is a similar effect in the register used for the depression questionnaires. While in theory this register is based on patients who have ever had depression, it continues to rise in a linear fashion, suggesting that practices are coding only new diagnoses rather than historical ones.
 
Some areas do show a change. There continues to be an increase in the diagnosis of diabetes. Throughout the UK this has risen by over a quarter since the introduction of QOF and the rate is not slowing. Diagnosis of chronic kidney disease has also risen by almost half since its introduction in 2006, although this year the rate of increase is a little lower. It seems likely that much of the increase in both cases is due to improved diagnosis and coding, although other studies have suggested a smaller genuine increase in pathology. These are long-term changes and there is little to suggest a genuine reaction to the changes in payment calculation; any such reaction may take another year to emerge, once practices see the actual change in their income.
 
There is another aspect to these payments. As they are related to the average prevalence levels, there is effectively a pay pool for each clinical area. As the average prevalence goes up, the payment per patient goes down. This is a form of "The Tragedy of the Commons" – each practice individually will benefit from an increase in its own prevalence, but things simultaneously get a little tougher for everyone. For instance, as diabetes levels have risen the payment per patient has fallen by around twenty percent over the past seven years.
 
This is complicated further by the fact that the average is calculated separately for each of the four countries of the United Kingdom. England tends to have the lowest prevalence and, of course, has the great majority of the population. Prevalence for most areas is about ten to twenty percent higher in Wales. Scotland is more variable, although it has a high prevalence of coronary heart disease, as might be expected. For the reasons described above, a patient in Wales with CHD attracts a payment of only 85% of that for an English patient, and a patient in Scotland only 77%. These figures are based on this year's data. The issue has been there since the advent of QOF but has become more striking with the changes to prevalence payments.
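To see where figures like these come from: the per-patient value of a register is, in effect, inversely proportional to that country's average prevalence, so a higher national average means each patient attracts less money. The CHD prevalence figures in this sketch are invented purely to reproduce the ratios above; the real calculation also involves small differences in point values and list sizes between the countries.

# Per-patient value is roughly proportional to 1 / national average prevalence.
# These prevalence figures are illustrative assumptions only.
national_chd_prevalence = {
    "England": 0.034,    # assumed
    "Wales": 0.040,      # assumed, roughly 18% higher than England
    "Scotland": 0.044,   # assumed, roughly 29% higher than England
}

england = national_chd_prevalence["England"]
for country, prevalence in national_chd_prevalence.items():
    relative_value = england / prevalence
    print(f"A CHD patient in {country} carries {relative_value:.0%} "
          f"of the value of one in England")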
 
There were some peculiarities in the data from 2010, with very high prevalences in fast-growing practices, particularly the "Darzi clinics". Some of these had disease registers that were larger than their entire registered population. The reason is that the practice population is measured on the first of January while the registers are counted on the first of April; for a very fast-growing practice the latter can be larger than the former. This does not seem to have been an issue this year – and the removal of the five percent rule means that it would no longer cause problems for other practices.
 
Achievement is also, of course, a vital part of QOF, and pleasingly it seems that in each of the four countries the average number of points gained has increased by ten points per practice over the last year. For a typical practice this is worth almost thirteen hundred pounds. As none of the indicators have changed, this is a genuine like-for-like comparison, which has not been possible for a number of years.
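The arithmetic behind that figure, assuming a point value of roughly £130 for an average-sized practice (the approximate national figure around this time):

point_value = 130.0   # assumed approximate value of one QOF point in pounds
extra_points = 10
print(f"£{extra_points * point_value:,.0f}")   # £1,300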
 
On average, practices gained just under 950 points. There was some variation around the country, with London doing particularly badly at an average of 933 points and the South and South West doing rather better than average at 958 and 965 points respectively. London tends to be different from the rest of the country in most respects, and had the lowest number of points gained in the clinical, organisational and additional domains. The three countries outside England all have points averages towards the top of the range.
 
The patient survey was the area with the lowest proportion of points earned, at an average of around three quarters. Again this was lowest in London, where only sixty-eight percent of points were gained. The patient survey has since been abolished and it is to be hoped that the new system works better.
 
Clinical achievement has been universally high, with an average only a couple of percent short of 100. This level was fairly consistent across all clinical areas, with the notable exception of depression, where achievement fell below ninety percent. The two indicators that were particularly difficult for practices were the screening of patients with chronic disease for depression and the second depression assessment questionnaire. It may be worth concentrating on these areas this year as they are to continue next year.
 
Although palliative care carries only a small number of points, its level of achievement was also a little lower than in other areas. This is likely due to the small number of patients involved: some practices may have had no patients requiring palliative care on the first of April and would have lost all of the points in this area.
 
Records 21 – the recording of ethnicity for all new patients – is a point gained by fewer practices than most. It has no minimum threshold: every new patient must have ethnicity recorded, so it takes just one missed patient to lose the target. There is only one point here, and unless you are very confident of reaching 100% it may simply not be worth the risk of attempting it.
 
Other areas that have proved challenging include the recording of smoking status in all patients over the age of 18. Although it sits in the records domain, this is paid on a sliding scale like the clinical indicators, with an upper threshold of ninety percent. Practices in all countries averaged around eighty-five percent, although there was a good deal of variation. This area will become more difficult next year, when smoking cessation advice will also be required.
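Because this indicator pays on a sliding scale, the points earned depend on where achievement falls between a lower and an upper threshold. In this sketch only the 90% upper threshold comes from the indicator itself; the 40% lower threshold and the points total are illustrative assumptions.

def sliding_scale_points(achievement, total_points, lower=0.40, upper=0.90):
    # Points are awarded proportionally between the lower and upper
    # thresholds: nothing below the lower threshold, full points at or
    # above the upper threshold.
    if achievement <= lower:
        return 0.0
    if achievement >= upper:
        return float(total_points)
    return total_points * (achievement - lower) / (upper - lower)

# A practice recording smoking status for 85% of its adult patients,
# assuming the indicator carries 11 points
print(f"{sliding_scale_points(0.85, 11):.1f}")   # 9.9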
 
Being aware of performance in QOF relative to other practices is both useful for professional development and potentially lucrative. QOF remains optional, and a crude comparison of points can be unhelpful, but practices should know why their data varies from that of their neighbours.
 
Dr Gavin Jamie
Swindon GP
Webmaster of the QOF Database

