Dr Gavin Jamie
Swindon GP
Webmaster of the QOF Database
The new quality practices indicators of the Quality and Outcomes Framework (QOF), with their attached 96.5 points, were the biggest surprise of the changes for 2011-12. Even the National Institute for Health and Clinical Excellence (NICE) committee responsible for devising new indicators was unaware of them until their publication.
There are strict cut-off dates. It is vital that practices, groups of practices and local NHS organisations move rapidly to understand and implement them.
The guidance states that these are for one year only, although they may be extended for a second year if savings are demonstrated by October. It seems unlikely that there will be any evidence one way or the other by that date and a second year may be commissioned simply to gather additional evidence.
While there has been some suggestion in the past of a 'local QOF', where indicators can be agreed locally, this was to be based on picking from a menu of indicators. In all these new indicators most of the requirements are based on local agreement and in some cases thresholds are also to be set locally. One practical effect of this is that there will be little or no IT support and none of these indicators will be automatically assessed by the Quality Management and Analysis System (QMAS) or similar payment systems. Much will rely on good relations between practices and their contract holders.
In England the contract holders are the primary care trusts (PCTs) for the time being. In Scotland, Wales and Northern Ireland these are health boards. The official guidance uses the blanket term primary care organisation (PCO) which is used here.
All of the areas are dependent on good quality data about practice behaviour. Data will be available from the Prescription Pricing Authority (PPA), the Secondary Uses Service and from PCO records. In practice these external resources can be slow to update and are available only to PCOs, who will process the data before it gets to practices. Plans and targets must take account of the likely availability and timeliness of data at the end of the year.
The three areas covered by these indicators are prescribing, outpatient referrals and emergency attendances at secondary care. The indicators reward reviewing current data, identifying potential improvements, peer review with other practices, and producing and implementing plans for change. Only in prescribing are there points for specified prescribing behaviours, and these vary depending on achievement. There are no points specifically for reducing the number of referrals or admissions.
The process to set these targets for prescribing is quite bureaucratic and it is important to get moving as the schedules are tight. The ultimate aim is to produce three indicators for prescribing, which are individual to each practice. These three indicators should be different to those currently agreed under the Medicines 6 and 10 indicators. Those older indicators continue to attract eight points. The newer indicators total 28 points.
The first stage is for practices to review their prescribing. All of the prescribers in the practice should be involved in this review. Three areas for improvement should be identified and a report produced summarising the discussions and listing the three areas to move through to the next stage. These three areas must be agreed by the PCO in writing before 30 June. Six points are awarded when this is agreed.
In the second stage these three areas need to be discussed with a group of other practices. Normally there should be at least six practices in this discussion, although in some areas geography may necessitate a smaller group. This would need the agreement of the PCO.
The purpose of this group is two-fold. First, prescribing should be compared among the practices in the group. Second, the group needs to define how the numerator and denominator will be calculated for each of the three areas identified by practices. Following agreement by the group, the plans must be agreed by the PCO before 30 September. Seven points are available for meeting this deadline.
Several factors need to be considered when setting the indicators. The first is information availability. Data from the PPA has traditionally been used as a measure of practice prescribing, and the ePact report will be used to assess the indicators. The PPA measures dispensing rather than prescribing, although this has not traditionally been considered a significant issue.
Of more importance may be the significant lag before information is available. The guidelines specify that it will be the first three months (January to March) of 2012 that will be assessed. It seems unlikely that this data will be available on ePact before June with payment to follow after this.
ePact is not available to practices directly and all information will have to be supplied by the PCO. From October this information will be supplied monthly by the PCO as it becomes available.
There are other limitations in the form of the data from the PPA. Information on dispensing is given in terms of total number of items and also total cost. For the purposes of the QOF, only the item count is to be used. No indicator can therefore use the quantity of tablets dispensed. For instance, an indicator could not incentivise 28-day prescribing. Equally it is not possible to look at individual patient data. An indicator could not target prescribing for residential home residents or the use of specific drug combinations.
The use of an item count could cause other problems. Ratios of prescriptions dispensed have been popular indicators in prescribing incentive schemes. If a practice were to prescribe simvastatin in 30-day prescriptions and rosuvastatin in 90-day prescriptions there would be a three-to-one ratio of items without actually affecting medication taken by patients.
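The arithmetic behind that distortion is simple. As an illustrative sketch (the 360-day year is an assumption chosen for round numbers, not anything from the guidance), two statins taken identically every day of the year produce very different item counts once prescription durations differ:

```python
DAYS = 360  # illustrative year length, chosen for round numbers

# One patient on each drug, taking it every day of the year
simvastatin_items = DAYS // 30   # 30-day prescriptions -> 12 items
rosuvastatin_items = DAYS // 90  # 90-day prescriptions -> 4 items

ratio = simvastatin_items / rosuvastatin_items
print(simvastatin_items, rosuvastatin_items, ratio)  # 12 4 3.0
```

The patients' actual exposure to each drug is identical; only the item counts differ, which is why ratio-based indicators built on item counts need careful interpretation.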
This is an example of a wider issue. Setting indicators that effectively incentivise the desired behaviour is hard – especially where there is no exception reporting. Indicators need to be carefully worded to avoid perverse incentives being built in.
The thresholds for each individual indicator are to be designed according to a set method. The top threshold is set at the level of the 75th centile at December 2010 – ie, the level of the top quarter of practices at this time. To get to the top level, the practice does not have to be in the top quartile of practices – just to meet the level that the top quartile met a year previously. The set level can be lower, but not higher, than this by agreement with the PCO.
The lower threshold will be set at 20% less than the upper threshold and each indicator will carry five points. So an indicator with a top threshold of 80% will give zero points at 60%, rising smoothly to 2.5 points at 70% and the full five points at 80%.
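The scoring described above can be sketched as a small calculation. This is an illustrative sketch only: the function name is invented, and the linear rise between thresholds is inferred from the worked example in the text rather than from any official payment software.

```python
def qp_points(achievement, upper_threshold, max_points=5.0):
    """Points for one indicator, rising linearly between the
    lower threshold (upper minus 20 percentage points) and the
    upper threshold, as described in the guidance."""
    lower_threshold = upper_threshold - 20
    if achievement <= lower_threshold:
        return 0.0
    if achievement >= upper_threshold:
        return max_points
    # Smooth linear rise between the two thresholds
    return max_points * (achievement - lower_threshold) / (
        upper_threshold - lower_threshold)

# Worked example from the text: upper threshold of 80%
print(qp_points(60, 80))  # 0.0 points
print(qp_points(70, 80))  # 2.5 points
print(qp_points(80, 80))  # 5.0 points
```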
Practices will gain the points by their absolute achievement rather than any improvement. Practices with previously low prescribing costs will tend to gain the points more easily.
The indicators relating to outpatient referrals follow a similar pattern, though without the outcome indicators. The three stages of internal practice review, peer-group review and working towards agreed plans are implemented here.
Review of referrals within the practice will count for five points. Information about referrals should be supplied by the PCO, and the extent and quality of this information is likely to shape the review. Data about outpatient appointments in acute trusts is fairly easy to come by through billing information and the Secondary Uses Service, but can be subject to considerable delays: data may not become available until up to six months after referral. It does not cover services provided out of hospital. As with the PPA data, this is not actually data about referrals but rather about attendances. Patients are not counted if they were diverted or did not attend the outpatient department.
PCOs may also have information directly about referrals through either Choose and Book or direct referral monitoring. Practices will need to discuss what information they want, and is potentially available, with their PCO. In many cases there may be rather less data to work from than practices would expect.
A report summarising these discussions submitted before the end of March 2012 will earn five points.
The second part is, again, to review referral information with at least five other local practices that refer to a similar group of providers. Differences should be compared across the group or the entire PCO area. While these comparisons will provide a valuable stimulus for review, it is important not to get too fixated on the comparators: it is relatively easy to end up chasing the numbers without tackling the reasons behind them.
Comparators should be seen as a stimulus to further analysis and reflection, rather than an outcome measure or target.
The review should suggest both potential changes to referral behaviour and commissioning and service design improvements to the PCO. Each practice should produce a report summarising these discussions and detailing the suggested improvements by 31 March for five points.
The final part, carrying 11 points, concerns the development and implementation of care pathways within primary care. The explicit objective of these pathways is to avoid unnecessary attendances. It is expected that this will largely be led by the PCO but the practice must become involved in setting up these care pathways as part of their cluster of practices. This could involve attending meetings with other healthcare professionals to assist in the development of the pathways.
Three pathways need to be set up and implemented by the practice. Where a practice does not have direct influence on the setting up of pathways – eg, where referrals are dealt with by a referral centre – the practice will not be disadvantaged. The practice is expected to follow the pathways in all cases. If a practice does not follow the pathway in a particular case, the reasons must be given in the annual report, which will also include information about how the practice implemented these pathways.
Keeping track of patients as they move through pathways could be a complex business, and practices need to ensure that both the paperwork and patient numbers remain manageable within the available resources.
The final three indicators deal with emergency admissions – defined as admissions at short notice due to clinical need. The indicators are the same as those for outpatient referrals although the points available are significantly higher.
There are five points available for the internal practice review of data supplied by the PCO. Fifteen points are available for the review with a cluster of practices and service redesign proposals. A vast 27.5 points are available for the design and implementation of primary care pathways, although these pathways must be different to those in the outpatient indicator. While this may be the area in which practices have the least influence, it has almost as many points as all the other quality practice indicators put together. This reflects the importance attached to these areas rather than the work involved.
There is a lot for practices to do, and many reports for them to write, to achieve these indicators. However, there is also a huge burden of work for PCOs. Significantly, there is a large amount of data that must be supplied to practices and clusters in a format that can be easily analysed. There are no nationally deployed tools that give the required data for prescribing, referrals or admissions to practices, so the onus is firmly on PCOs. There are also few tools for the analysis of the data by practices.
PCOs will also be required to assess reports from practices and work with clusters to develop care pathways. This will be time-consuming. They will need to ensure that they have the staff and the skills to fulfil these requirements.
There is also a considerable amount of work for practices and, as always, practices should consider whether the rewards justify the effort. At least eight reports to the PCO will be required to gain all of the points, in addition to clinical costs or attendance at meetings. Smaller practices may find this particularly difficult, as the cash value of points is linked to list size while the paperwork is much the same for all practices.
While these indicators have undoubtedly been influenced by the move to consortia, they have no defined role here. With regard to the grouping of practices into clusters, as with consortia the useful size is likely to be smaller than the majority of current potential consortia. These clusters must have at least six practices but may become unmanageable with many more than 10. Such groupings may already exist in some areas, but if you do not have one locally you should look to set one up with practices you feel you can work with.
It seems likely that, at practice level at least, these indicators will dominate the commissioning agenda this year. There are already reports of PCOs dropping prescribing incentive schemes, and the former practice-based commissioning agenda is likely to be replaced. The appetite, energy and resources for anything in addition, particularly commissioning, are likely to be very low. It will be as important for PCOs to engage as it is for practices.