
A decade of QOF

13 November 2013



With the quality and outcomes framework approaching its tenth birthday, we look back at the evidence and impact of the scheme
April 2014 will mark ten years since the introduction of the quality and outcomes framework (QOF) as part of the new general medical services (GMS) contract. While sales of bunting are unlikely to soar around this anniversary, the QOF has left an indelible mark on general practice and health care in the UK. 

It was one of the first, and is still one of the largest, physician incentive schemes in the world. The proportion of income dependent on QOF achievement is considerably larger than in comparable systems, although NHS general practice contracting is also practically unique. As we approach a decade it is worth considering what has been learnt about financial incentives for primary care during that time. QOF has changed considerably in these years and will continue to do so, but management by financial incentives, whether at the national or CCG level, seems set to continue. It is important that when new schemes are devised the evidence collected since 2004 is used as the basis of decision making.
When the QOF was first proposed the use of computer systems in consultations was far from universal. The original specifications included instructions on conducting an audit of paper notes to calculate achievement. The QOF effectively made computer use universal within a couple of years, accelerating the uptake and breadth of electronic health records.
That, of course, was not the primary objective of the framework. The aim was to incentivise the improvement of quality in general practice. Prior to 2004 there was no real measure of the quality of care being delivered by GPs. There were already some payments for activity – the payments for vaccinations continue in much the same way, and those for cervical cytology have seen only minor changes – but the scope of these was much narrower than that of the QOF payments.
It is difficult to say how much effect the targets have had on patient outcomes. The QOF has always been meant to be evidence based, although adherence to this aim has varied over the years with political fashions. Review of indicators by the National Institute for Health and Care Excellence (NICE) has improved the public evidence base in recent years, but other indicators, such as those in the quality and productivity area, have compromised this principle.
Part of the problem in assessing the clinical effect is that, even where an intervention has a strong evidence base, there is very little evidence on the use of incentives to promote those interventions. How could we be sure that this incentive scheme was delivering better services to patients?
Several studies have looked at this very question. As the QOF was not trialled before its introduction, there has been no control group to allow a comparison of the effects of incentives against all of the other factors that can affect diagnosis and treatment. Guidelines change, new treatments become available and awareness of particular diseases varies over time. The years of the QOF have also seen the introduction of public smoking bans throughout the UK. All of these make the statistics difficult to interpret.
While randomised controlled trials might not have been possible there have been a number of observational studies looking at trends in diagnosis and treatment. The best of these studies had data from both before and after the introduction of the QOF. 
Although there was a clear improvement in diagnosis rates and in the treatment of several chronic conditions, this was mostly a continuation of trends that had existed before the QOF. There was some evidence of a ‘bump’ in the first year but less evidence of a long-term effect.1 It is not possible to say how long these trends would have continued without incentives, but this is scarcely a ringing endorsement of the QOF. There is also a natural limit to how high achievement can go: most obviously it can be no higher than 100% of patients, at which point the rate of improvement is likely to drop to zero. In most real-world cases the maximum may be a little lower, but either way a steady pace of improvement cannot be sustained indefinitely. We may simply be seeing the natural evolution of the treatment of disease.
Fairly swiftly, the New England Journal of Medicine published a paper looking at achievement in the first year of the QOF.2 There was some association between lower-achieving practices and patient populations with more patients in lower-income or single-parent households. Older GPs tended to do less well. Smaller practices tended to do better, as did doctors who had received their medical training in the UK. These effects were small, however, and most of the variation between practices could not be explained in these terms.
A later study, published in The Lancet in 2008, looked specifically at the QOF achievement of practices in areas of deprivation.3 By this time there were three years of data to examine, although the QOF had also grown and the lower thresholds had been raised by year three. The gap in achievement between practices with the greatest and least deprivation steadily narrowed over the first three years of the QOF: achievement increased in all practices, but the rate of increase was higher in practices with more deprived populations.
It is also true that the QOF only looks at a fairly small sample of what practices actually do. What is not clear from these studies is whether there is an overall increase in quality or whether the improvement is restricted to those actions that have been specifically incentivised.
This latter point has been addressed more recently. It is difficult to quantify the effect on areas that have not been included in the QOF, as their exclusion is often due to their being harder to measure objectively. In research published in the New England Journal of Medicine, new indicators were devised and applied retrospectively to data from GP surgeries.4 The results generally showed a slower pace of improvement in these non-incentivised indicators – which the practices did not even know about – than in incentivised areas.
This cuts to the heart of what we expect a physician incentive scheme to do, something that has been less than clear in the QOF from the start. While there was a desire to reward quality, there has always been a clear mandate to incentivise improvement in quality. It would surely be a disappointing incentive scheme that did not produce better results in the areas where it operated than in those where it did not.
It is reasonable to expect practices to concentrate on achieving the incentivised indicators and ensuring that they are coded correctly.
GPs’ reaction to the detail of the QOF’s mechanisms has also been the subject of study, particularly around exception reporting. Health economists from York5 looked in detail at exception reporting in the QOF. Their observation was that practices were more likely to exception report patients when they needed to do so to hit the targets. They interpreted this as cheating by practices, although it seems more likely to me that practices simply did not bother to exception report where there was no incentive to do so.
The same study also acknowledged that there was quite a bit of achievement above the thresholds. This activity towards QOF indicators without payment was described as altruistic, although I would prefer to describe that behaviour as professional. GPs do what is best for their patients. They also tend to ignore what seems irrelevant.
In general, the highest achievement levels have been in indicators where there has been widespread agreement that the intervention is useful. Where indicators appeared to exist purely to enumerate some aspect of care, such as the PHQ-9 depression assessment or the more recent General Practice Physical Activity Questionnaire, achievement levels have been rather lower.
The indicators themselves have also been victims of circumstance at times. Generating new areas for consideration has tended to be a convoluted process, with indicators taking some years to emerge from the evidence. More than once, indicators have been superseded by the time they were implemented. In the very first years of the QOF, airways reversibility testing for patients diagnosed with chronic obstructive pulmonary disease (COPD) suffered exactly this fate. Since then, evidence has emerged that the thresholds for blood glucose control in diabetes may have done more harm than good, and they were later revised upwards.
However, there have also been concerns about the effect of incentive schemes on the mechanics of the consultation. It was felt that the agenda in consultations had become more doctor-centric, with the GP or nurse responding to prompts from the computer rather than cues from the patient. Practice computer systems became efficient at delivering these prompts, which appeared on screen with a prominence at least as high as that of past history or allergies.
This concentration on the medical model and the imposition of an external agenda into the consultation has been a source of concern since the introduction of QOF. An editorial in the British Journal of General Practice in 2007 asked “What have you done to yourselves?”6 presenting the QOF indicators as reducing professionalism, encouraging data collection and changing the status of GPs to that of medical technicians.
In 2013, an essay in the BMJ again complained about the intrusive nature of data collection, declaring that “Patients are not (only) data fields for the doctor to harvest, objects to be imaged, or problems to be solved.”7
Actually, I have taken that last quote out of context, because it was not about the QOF at all. The author was a GP in the United States who was seeing much the same sort of consultation agenda-setting by external targets. While the QOF may have been one of the first such schemes, there has since been a profusion of them internationally.
With all of this activity there have been numerous studies of all manner of systems. Once enough studies have been carried out, a systematic review becomes possible, and this is exactly what the Cochrane Collaboration has done.8 As the QOF was never designed with a control group, it was not included in any of the papers considered in the review. Disappointingly, the reviewers found no evidence of any benefit to patients from the physician incentive schemes studied.
However the studies were generally considered to be of low quality and the absence of evidence of benefit could not be considered evidence of an absence of benefit. More research, to use a cliché, is needed. 
Payment by performance is not going to disappear from healthcare in the NHS. While the national schemes are at least as much about politics as evidence, there are a large number of local enhanced services produced by clinical commissioning groups (CCGs).
Setting targets is hard. A checklist published in the BMJ, based on evidence from the QOF, is a very good guide to producing useful indicators.9 Even harder is designing a system into which those indicators can be incorporated. It is no small task to produce a scheme that will do more good than harm.
With a decade of research and experience behind us, it is as important for CCGs to consider the evidence for the scheme they are developing as the clinical evidence for the intervention they are promoting.
 
References
1. Vaghela P, et al. Population Intermediate Outcomes of Diabetes Under Pay-for-Performance Incentives in England From 2004 to 2008. Diabetes Care. 2009;32(3):427–9.
2. Doran T, et al. Pay-for-Performance Programs in Family Practices in the United Kingdom. N Engl J Med. 2006;355(4):375–84.
3. Doran T, et al. Effect of financial incentives on inequalities in the delivery of primary clinical care in England: analysis of clinical activity indicators for the quality and outcomes framework. Lancet. 2008;372(9640):728–36.
4. Doran T, et al. Effect of financial incentives on incentivised and non-incentivised clinical activities: longitudinal analysis of data from the UK Quality and Outcomes Framework. BMJ. 2011;342.
5. Gravelle H, Sutton M, Ma A. Doctor Behaviour Under a Pay for Performance Contract: Further Evidence from the Quality and Outcomes Framework. York: Centre for Health Economics, University of York; 2008.
6. Mangin D, Toop L. The Quality and Outcomes Framework: what have you done to yourselves? Br J Gen Pract. 2007;57(539):435–7.
7. Loxterkamp D. Humanism in the time of metrics – an essay by David Loxterkamp. BMJ. 2013;347:f5539.
8. Scott A, et al. The effect of financial incentives on the quality of health care provided by primary care physicians. Cochrane Database Syst Rev. 2011;(9):CD008451.
9. Glasziou PP, et al. When financial incentives do more good than harm: a checklist. BMJ. 2012;345:e5047.
