I have a puzzle for you: thousands of patients are apparently missing from the English waiting list. I don’t know where they are (though I’ll have a go at guessing), and I’m hoping some of you can help me.
Here’s the problem.
In principle, we should be able to start with, say, the 4-5 week waiters from the end-of-January waiting list, take away the patients from that cohort whose pathways were completed (as admitted or non-admitted patients) during February, and (because February was exactly 4 weeks long) end up with an estimate of the 8-9 week waiters on the end-of-February waiting list.
That method would miss any patients who were removed without being seen or treated (for instance ‘validated’ patients who had been reported on the January waiting list in error), but that error should all be in one direction: it makes the reported February figure smaller than our estimate. Patients cannot appear on the waiting list with several weeks on the clock out of thin air, can they? So our estimate, minus the reported end-of-February list, should always produce an anomaly that is positive, reflecting validation during February.
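The cohort-tracking sum can be sketched in a few lines of Python. This is purely illustrative, using the national figures quoted in the oddities below; the function and its name are mine, not part of any official methodology:

```python
# Cohort-tracking check: roll one month's waiting-list band forward four
# weeks, deduct completed (admitted + non-admitted) pathways to get an
# estimated upper limit, and compare it with the band actually reported.

def anomaly(estimated_upper_limit, reported):
    """Estimate minus reported figure.

    Positive values are expected (they reflect validation removals);
    negative values mean patients have appeared 'out of thin air'.
    """
    return estimated_upper_limit - reported

# Oddity 1: at most ~177,720 five-to-six week waiters were expected at the
# end of February, but 179,087 were reported.
print(anomaly(177_720, 179_087))  # -1367

# Oddity 2: at most ~129,045 eight-to-ten week waiters were expected,
# but 144,434 were reported.
print(anomaly(129_045, 144_434))  # -15389
```

Nothing clever is going on here; the point is simply that a negative result should be impossible if the waiting list behaves as the rules say it should.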
Sounds great. But if you actually do the sums you come across some oddities. Several, in fact, as you can see from the supposedly-impossible negative values in the chart below.
1) Missing very-short waiters
The first oddity is for the very shortest waiters. If you take the number of patients across England who have waited 1-2 weeks at the end of January, and knock off February’s admitted and non-admitted patients, then the expected number of 5-6 week waiters at the end of February should be no more than about 177,720. But in fact some 179,087 were reported in the end-of-February waiting list data: more than a thousand too many. That’s the small negative anomaly at 5-6 weeks in the chart above. A thousand-odd patients have appeared in the February figures out of thin air. Where did they come from?
They weren’t new referrals being treated immediately (those could only affect February’s 4-5 week cohort, which should really be counted as part of this oddity as well). So they must have appeared on the waiting list only a week or more after referral. This, as far as I am aware, is quite common, because paper referrals are often graded for urgency by the consultant before being recorded on PAS, and this process can take as long as a week or two. If that is what is happening, then late recording would account for the first oddity.
2) Missing 9-week waiters
The second oddity crops up at 8-10 weeks, and this is larger and more mysterious. At the end of January there were 233,003 patients on the waiting list who had waited 4-6 weeks since referral. After deducting the relevant admitted and non-admitted patients, you are left with an upper limit for 8-10 week waiters at the end of February of about 129,045. But in fact the reported figures show there were 144,434: some 15,389 too many, and causing the large negative anomaly in the chart. That’s a lot of patients suddenly appearing in the February figures. Where did they come from?
I don’t know the answer to this one, which is why I’m asking. But my guess is that this has something to do with cancer pathways. Could it be that some cancer patients are not being reported in the incomplete pathways statistics, but are being reported in the admitted and non-admitted figures? The NHS Standard Contract specifies that cancer patients should be treated within 62 days of referral, which is 9 weeks and coincides nearly enough with this anomaly. If large numbers of cancer patients are not being recorded in hospitals’ mainstream computer systems, which this explanation implies, then that in itself could be worrying because parallel and duplicate administrative systems can lead to patients getting lost.
3) Missing 17-week waiters
The third oddity is around 18 week waits. It isn’t large enough to appear as a negative anomaly in the national statistics charted above (though it does show as a step-change), but if you drill down to Trust level it does produce a negative anomaly for some individual Trusts. Because the cohort-tracking sums are inexact, and because quite a few Trusts crop up in this analysis, I am not going to name Trusts individually but instead will look at the overall pattern.
At some Trusts, the reported number of patients waiting 17-18 weeks at the end of February is higher than you would expect (a negative anomaly at Trust level), with no corresponding negative anomaly for 18-19 week waiters. In most cases the negative anomaly is small, both in absolute numbers and as a percentage. But in a handful of Trusts it does look significant; in other words, significantly more patients are being reported just within the 18-week target than you would expect.
Again I don’t know what the explanation is, but my guess is that some Trusts (or some parts of some Trusts) might be applying clock pauses to their waiting list figures. That is strictly forbidden; the guidance says (emphasis in original):
“Clock pauses may be applied to incomplete/open pathways locally – to aid good waiting list management and to ensure patients are treated in order of clinical priority – however, adjustments must not be applied to either non-admitted or incomplete pathways RTT data reported in monthly RTT returns to the Department of Health.”
4) Disappearing 18-week breaches
The final oddity is just above the 18-week mark, and this anomaly goes in the opposite direction. From 18-22 weeks, the end-of-February waiting list is around half the expected size, so the anomaly is much more positive than expected.
My guess is that this is the result of waiting list validation being targeted at over-18-week waiters so that they don’t score against the admitted and non-admitted standards. This is a largely redundant tactic now that the main focus of the penalties, from April, is on incomplete pathways; Trusts today would be better advised to focus their validation efforts on patients approaching 18 weeks, rather than those who have already breached.
So there are four oddities in the data. If you can help explain any of them, or at least explain what is happening where you work, then do leave a comment below this post on the HSJ website (either anonymously or otherwise), or contact me in confidence by email or publicly on Twitter.
If you want to dive into the figures, you can download a spreadsheet that contains all the detailed calculations here.
A few more suggestions that have been put to me since I posted this:
Some missing waiters around the nine-week mark could be Choose & Book patients, who were told by C&B that no appointments were available and therefore raised an ASI (Appointment Slot Issue). Those patients might then be managed on paper by the hospital until their slot is arranged, which might take several weeks, during which they might not be reported as incomplete pathways. (Incidentally, this is a wasteful and risky administrative process, and the patient usually ends up in a similarly-dated slot to the one they would have had if C&B polling ranges had simply been extended.)
Some missing patients close to the 18-week mark at Trust level (though not at national level) are tertiary referrals. These arrive at the tertiary centre with time already on the clock (although there is now the option for the referring provider to take the ‘hit’ on any breaches caused by delays at their end: http://transparency.dh.gov.uk/files/2012/06/RTT-Reporting-patients-who-transfer-between-NHS-Trusts.pdf).
Here is a comment left at the HSJ website:
Anonymous | 2-May-2013 11:13 am
A few points come to mind in response to this article:
- As a general comment, early this (calendar) year, the impending financial penalties for >52 week waiters resulted in a flurry of activity to clear up waiting lists and address data quality issues. This almost certainly has created lots of apparent anomalies that are in fact data quality corrections.
- The >52 week penalties are contained in the standard NHS contract template – you will find that some CCGs have chosen not to include them in the final versions used for their providers. I think this may happen in situations where the provider is on a block contract. This is probably not a major factor though.
- My experience suggests that providers will not stop validating 18 week breaches against the clock stop targets – I am not sure any board or exec would simply not be worried about breaches that aren’t really breaches, financial penalty or not. It is still a core operational standard (as defined by the NTDA) so will still create a fuss if not achieved.
- As regards the missing very short waiters, grading for urgency by clinicians has definitely been known to take longer than 2 weeks. A less-than-one-per-cent discrepancy could easily be explained by late grading and, probably more commonly, by hospitals without single points of referral receipt not getting things on the system in a timely fashion, e.g. letters going directly to medical secretaries who sit on them for too long. If you know the patient won’t be seen for >10 weeks, why bother getting them on the system – this is the attitude in some cases at least!
Some of the differences between Scottish and English waiting times are pretty obvious. England has three 18-week referral-to-treatment targets and a 6-week diagnostic wait (pp.38 & 58), whereas Scotland has one 18-week referral-to-treatment target, a 6-week diagnostic wait, a 12-week inpatient/daycase Treatment Time Guarantee, and a non-legally-binding 12-week outpatient wait (p.5). Already we can see that it’s quite complicated in England, and even more complicated in Scotland.
If you dig into these targets you find the rules are different too. The differences are pretty big, and many patients who would have a right to short waiting times in England enjoy no such guarantees in Scotland.
For instance, if you are referred to an English hospital then they have to accept the referral and treat you (unless they don’t provide that kind of care, or you agree to be treated elsewhere) (pp.7-8). But in Scotland the hospital can routinely send its patients just about anywhere it likes (p.16), even if the destination is way outside the boundaries of its Health Board; any patient who refuses can be taken off the waiting list or have their ‘clock’ reset to zero (p.17). In case you think that such long-distance transfers might be a rare event, Scottish Health Boards have regular arrangements to send increasingly large numbers of waiting list patients to the Golden Jubilee National Hospital west of Glasgow, even from as far away as Orkney (p.5).
You have to be ready at short notice in Scotland too, because the NHS considers seven days’ notice to be a “reasonable offer” (p.15), compared with three weeks in England (pp.34-35). (To protect urgent patients, hospitals can offer shorter-notice appointments in both nations, and patients are free to accept or reject them without penalty.)
And you should avoid changing your appointment in Scotland, even if you give them plenty of notice, because the hospital can use that as an opportunity to reset your clock to zero; if you change your appointment three times, they are normally expected to send you back to your GP (p.19). There are no such sanctions for changing appointments in England even if you give only short notice (p.28). In both nations, though, you can be taken off the list and sent back to your GP if you fail to attend your first outpatient appointment without giving notice (i.e. you ‘DNA’) (p.20, p.28).
If you are ever unavailable for treatment, whether for medical or social reasons, then in Scotland your ‘clock’ is paused (pp.22-25). This rule was very heavily applied (pp.10, 19) until a recent clampdown. In England the new main target (based on incomplete pathways: p.58) does not allow clock pausing at all, although clock pauses were certainly allowed and used against the previous main target.
Then there are patients who are completely excluded from the targets. For obvious reasons, both England and Scotland exclude obstetrics from their waiting time guarantees. If you are waiting for an organ transplant, then the wait for the organ itself is excluded in both nations. And if you want to become pregnant, then assisted reproduction is covered in England, but not in Scotland (pp.13-14).
Both nations have short-wait guarantees for cancer outpatient appointments and initial treatment, but the English guarantee covers all cancers (pp.38-40) while in Scotland there are exclusions covering several cancer types (pp.15, 25-26). If you are having a course of cancer treatment then, in England, you are guaranteed your subsequent treatment within time limits, whether it’s surgery, chemotherapy or radiotherapy (pp.39-40); but there are no such guarantees in Scotland (p.5).
There are different exclusions in diagnostics as well. Scotland applies the 6-week guarantee only to eight key diagnostic tests (p.14), which means that English (but not Scottish) patients are guaranteed a 6-week wait for DEXA and various kinds of physiological measurement (p.8). However in both nations the diagnostic wait is part of the 18-week referral to treatment wait, so this may not make a massive difference in practice.
Why are the English rules apparently so much more patient-friendly and inclusive than the Scottish ones? I think the answer was right at the start: the nature of the waiting times targets.
In England, the overall targets have a tolerance, for instance that 92 per cent of patients on the waiting list must be within 18 weeks. That leaves an 8 per cent margin for the odd exceptions (and there will always be exceptions).
In Scotland, though, the legally-binding 12-week Treatment Time Guarantee is a 100 per cent target. There will still always be exceptions, so they must be allowed for in the rules; which means you need lots of rules.
Personally, I think the English approach is the better one. (And in case anyone north of the border is starting to suspect a national bias, I should say that I am Scottish and was born and brought up in Scotland.) Hard cases make bad law, and trying to define all the reasonable exceptions in the rules is inevitably going to be complex and imperfect. Better simply to allow a tolerance in the target and let the rules include everybody.
Imperial College Healthcare NHS Trust is in the news, with startling reports of a breakdown in record-keeping that resulted in patients waiting up to 2-3 years. Some of the patients who got lost in the system were suspected cancer referrals who the Trust is still trying to locate, months or even years later. It has been a horrible, stomach-churning failure.
To their credit, Imperial seem to be sorting things out pretty quickly: fixing the data, validating the waiting list, following up patients they are concerned about, clarifying scheduling procedures, and strengthening planning, all with external assistance and oversight. I don’t have inside knowledge of the actions they are taking, but it does look from the outside as if they are doing what you would expect.
Looking more broadly, how could the NHS become more resilient against this kind of failure? How can we make sure it never happens again and, if it does, that it is caught much more quickly to limit the damage?
Ultimately the answer is for any kind of waiting list to be regarded culturally as a sign of failure by the NHS, and to make involuntary waiting a thing of the past. But well before we reach that happy state there are more immediate and practical things we should do:
The first step is to simplify dramatically the reporting and targeting of waiting times. In common with most Trusts, Imperial’s scorecard in November 2011 (the last before their reporting break) tracked no fewer than eleven measures relating to the 18 week targets. Only one of those measures related to long-waiters still on the waiting list, and it was the second from last item. What were the other ten? Eight related to other waiting times targets set by the Department of Health, and the remaining two were Trust measures that simply tracked the numbers of patients being treated.
This proliferation is completely unnecessary. Get the waiting list right, and all the other measures take care of themselves. The Department of Health accepts the logic of scrapping the admitted and non-admitted targets, so let’s just do it. Then Imperial and everyone else can boil their 18 week reporting down to a single measure: the 92nd centile waiting time for incomplete pathways, so that Boards can see right away when things are going pear-shaped.
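As a sketch of what that single measure looks like in practice, here is one way to compute a 92nd centile waiting time from a list of current waits. The nearest-rank method and the example figures are my own illustrative assumptions, not the official statistical definition:

```python
import math

# Illustrative: the 92nd centile waiting time of an incomplete-pathway
# waiting list. If this figure is 18 weeks or less, then 92 per cent of
# the patients on the list are within 18 weeks.

def centile_92(waits_in_weeks):
    """Smallest wait w such that at least 92% of patients have waited
    w weeks or less (nearest-rank method)."""
    ranked = sorted(waits_in_weeks)
    rank = math.ceil(0.92 * len(ranked))
    return ranked[rank - 1]

# Hypothetical list of 100 patients: 92 waiting 10 weeks, 8 waiting 25 weeks.
print(centile_92([10] * 92 + [25] * 8))  # 10 -> within 18 weeks
```

The attraction of a centile over a percentage is that it is expressed in weeks: a Board can watch a single number drift towards 18 and act before the target is breached.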
The second is to put an end to one-year waits. Patients don’t know where they stand with a 90 per cent guarantee (they are left wondering: am I one of the 10 per cent?). But if they know that nobody waits longer than a year then something is definitely wrong if they have. A one year limit works for hospitals too: if no patient ever waits longer than a year then systems are unlikely to slip for more than a few months (at the outside) before someone notices.
Thirdly, we can improve the tracking and management of the most important patients on the waiting list: no, not the imminent 18-week breaches, but patients with a high clinical urgency. There is a data field in each PAS system for recording the urgency of every patient on the waiting list (two-week wait, urgent, or routine), but in many hospitals this field is poorly used. Using it consistently would strengthen waiting list management and reduce the risk of urgent patients being delayed.
Finally, and in the longer term, we can increase resilience by strengthening patients’ expectations and involvement during their waits. To their credit, the Government have made a start on this with the Operating Framework requirement to publicise the 18-week guarantee to patients. But these generalities are not specific enough: even BT do better, with regular personalised text updates on the escalation and fixing of the fault on your line. If patients were kept closely in touch with progress towards their appointments, they would be better placed to catch the ball if it were dropped. The usual system of fire-and-forget referrals, “you’ll get a letter” hand-offs, centralised complaints procedures, and all the rest is too distant and siloed; surely we can involve patients in a more predictable and personal service.
How pressing is all this? Around England, and particularly in London, there are plenty of hospitals reporting dozens (even hundreds) of patients still waiting more than a year after referral. How sure can we be that nothing similar is happening at any of them, or that none of those patients are waiting even longer than the 2-3 years found at Imperial?
The Government has listened, understood, and acted. The new RTT waiting times target is aimed directly at cutting the backlog of long-waiters, and elbows aside a target regime which actually punishes hospitals for treating long-waiting patients. The change, long called-for in this blog, is very welcome.
But no target is perfect. Targets always create problems of their own, distorting incentives and encouraging undesirable behaviours. Now that the perversities of the current regime have had their day, can we predict the nasties that the new target is going to throw up?
Happily, we don’t need to pull out our crystal ball. The new target is similar enough to the maximum waiting time targets of the 1990s that we just need to cast our minds back a few years. The two biggest problems then, and in the future, are likely to be distorted clinical priorities and hidden waiting lists.
Distorted clinical priorities
Point a TV camera at any NHS manager, and ask them: which is more important, clinical priorities or waiting time targets? They will rightly answer “clinical priorities”.
Now take the camera away, threaten them with loss of income or employment if they fail to treat their long-waiting patients, and turn a blind eye if clinical priorities are delayed. The consequences are as obvious as they are shameful. But delaying urgent patients to make room for long-waiters has happened before, and it may happen again.
Hidden waiting lists
Then there is the temptation to create “hidden” waiting lists, so that long-waiting patients don’t show up on the incomplete pathway figures.
This can be done blatantly (hiding referrals in drawers, creating “pending lists”, reclassifying patients as “planned”, or offering unreasonable appointments). Sometimes it happens through inattention (post-treatment follow-up backlogs). Sometimes it is the result of deliberate local policy (misusing low-effectiveness criteria to block or delay referrals).
So the new target, welcome though it is, leads us to new challenges and new dangers. They cannot be dealt with by national targets and national data collections; they must be tackled locally.
Good planning and management are clearly essential. But so is openness about local practices and policies; if patients and clinicians understand what is being done and why, you can be sure they will protest loudly and often if target-chasing ever dominates over basic fairness and clinical safety.