I have a puzzle for you: thousands of patients are apparently missing from the English waiting list. I don’t know where they are (though I’ll have a go at guessing), and I’m hoping some of you can help me.
Here’s the problem.
In principle, we should be able to start with, say, the 4-5 week waiters from the end-of-January waiting list, take away those patients who were admitted and non-admitted from the cohort during February, and (because February was exactly 4 weeks long) end up with an estimate of the 8-9 week waiters on the end-of-February waiting list.
That method would miss any patients who were removed without being seen or treated (for instance ‘validated’ patients who had been reported on the January waiting list in error), but that error should all be in one direction: to make the reported February figure smaller than our estimate. Patients cannot appear on the waiting list with several weeks on the clock out of thin air, can they? So our estimate, minus the reported end-of-February list, should always produce an anomaly that is positive and which reflects validation during February.
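The cohort-tracking arithmetic can be sketched in a few lines of Python. The function name is mine, and the split between the prior cohort and its clock stops in the example is hypothetical (the published data only gives the resulting estimate); the example figures match the first oddity discussed later in this post.

```python
def cohort_anomaly(prior_cohort, clock_stops, reported_cohort):
    """Estimate next month's cohort and compare it with the reported figure.

    prior_cohort    -- e.g. the 1-2 week waiters at the end of January
    clock_stops     -- admitted plus non-admitted pathways from that cohort
                       during February
    reported_cohort -- e.g. the 5-6 week waiters reported at the end of February

    A positive result is consistent with validation (removals without being
    seen or treated); a negative result means patients have appeared on the
    list 'out of thin air'.
    """
    estimate = prior_cohort - clock_stops
    return estimate - reported_cohort

# Illustrative split: a prior cohort of 200,000 with 22,280 clock stops gives
# the 177,720 estimate quoted below, against 179,087 reported.
print(cohort_anomaly(prior_cohort=200_000, clock_stops=22_280,
                     reported_cohort=179_087))  # -1367
```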
Sounds great. But if you actually do the sums you come across some oddities. Several, in fact, as you can see from the supposedly-impossible negative values in the chart below.
1) Missing very-short waiters
The first oddity is for the very shortest waiters. If you take the number of patients across England who have waited 1-2 weeks at the end of January, and knock off February’s admitted and non-admitted patients, then the expected number of 5-6 week waiters at the end of February should be no more than about 177,720. But in fact some 179,087 were reported in the end-of-February waiting list data: more than a thousand too many. That’s the small negative anomaly at 5-6 weeks in the chart above. A thousand-odd patients have appeared in the February figures out of thin air. Where did they come from?
They weren’t new referrals being treated immediately (those could only affect February’s 4-5 week cohort, which arguably belongs in this oddity too). So these patients must have appeared on the waiting list only a week or more after referral. This, as far as I am aware, is quite common, because paper referrals are often graded for urgency by the consultant before being recorded on PAS, and that process can take a week or two. Late recording, then, would explain the first oddity.
2) Missing 9-week waiters
The second oddity crops up at 8-10 weeks, and this is larger and more mysterious. At the end of January there were 233,003 patients on the waiting list who had waited 4-6 weeks since referral. After deducting the relevant admitted and non-admitted patients, you are left with an upper limit for 8-10 week waiters at the end of February of about 129,045. But in fact the reported figures show there were 144,434: some 15,389 too many, and causing the large negative anomaly in the chart. That’s a lot of patients suddenly appearing in the February figures. Where did they come from?
I don’t know the answer to this one, which is why I’m asking. But my guess is that this has something to do with cancer pathways. Could it be that some cancer patients are not being reported in the incomplete pathways statistics, but are being reported in the admitted and non-admitted figures? The NHS Standard Contract specifies that cancer patients should be treated within 62 days of referral, which is 9 weeks and coincides nearly enough with this anomaly. If large numbers of cancer patients are not being recorded in hospitals’ mainstream computer systems, which this explanation implies, then that in itself could be worrying because parallel and duplicate administrative systems can lead to patients getting lost.
3) Missing 17-week waiters
The third oddity is around 18 week waits. It isn’t large enough to appear as a negative anomaly in the national statistics charted above (though it does show as a step-change), but if you drill down to Trust level it does produce a negative anomaly for some individual Trusts. Because the cohort-tracking sums are inexact, and because quite a few Trusts crop up in this analysis, I am not going to name Trusts individually but instead will look at the overall pattern.
At some Trusts, the reported number of patients waiting 17-18 weeks at the end of February is higher than you would expect (a negative anomaly at Trust level), and they have no negative anomaly for 18-19 week waiters. In most cases the negative anomaly is small (or a small percentage). But in a handful of Trusts it does look significant; in other words significantly more patients are being reported just within the 18-week target than you would expect.
Again I don’t know what the explanation is, but my guess is that some Trusts (or some parts of some Trusts) might be applying clock pauses to their waiting list figures. That is strictly forbidden; the guidance says (emphasis in original):
“Clock pauses may be applied to incomplete/open pathways locally – to aid good waiting list management and to ensure patients are treated in order of clinical priority – however, adjustments must not be applied to either non-admitted or incomplete pathways RTT data reported in monthly RTT returns to the Department of Health.”
4) Disappearing 18-week breaches
The final oddity is just above the 18-week mark, and this anomaly goes in the opposite direction. From 18-22 weeks, the end-of-February waiting list is around half the expected size, so the anomaly is much more positive than expected.
My guess is that this is the result of waiting list validation being targeted at over-18-week waiters so that they don’t score against the admitted and non-admitted standards. This is a largely redundant tactic now that the main focus of the penalties, from April, is on incomplete pathways; Trusts today would be better advised to focus their validation efforts on patients approaching 18 weeks, rather than those who have already breached.
So there are four oddities in the data. If you can help explain any of them, or at least explain what is happening where you work, then do leave a comment below this post on the HSJ website (either anonymously or otherwise), or contact me in confidence by email or publicly on Twitter.
If you want to dive into the figures, you can download a spreadsheet that contains all the detailed calculations here.
A few more suggestions that have been put to me since I posted this:
Some missing waiters around the nine-week mark could be Choose & Book patients, who were told by C&B that no appointments were available and therefore raised an ASI (Appointment Slot Issue). Those patients might then be managed on paper by the hospital until their slot is arranged, which might take several weeks, during which they might not be reported as incomplete pathways. (Incidentally, this is a wasteful and risky administrative process, and the patient usually ends up in a similarly-dated slot to the one they would have had if C&B polling ranges had simply been extended.)
Some missing patients close to the 18-week mark at Trust level (though not at national level) are tertiary referrals. These arrive at the tertiary centre with time already on the clock (although there is now the option for the referring provider to take the ‘hit’ on any breaches caused by delays at their end: http://transparency.dh.gov.uk/files/2012/06/RTT-Reporting-patients-who-transfer-between-NHS-Trusts.pdf).
Here is a comment left at the HSJ website:
Anonymous | 2-May-2013 11:13 am
A few points come to mind in response to this article:
- As a general comment, early this (calendar) year, the impending financial penalties for >52 week waiters resulted in a flurry of activity to clear up waiting lists and address data quality issues. This almost certainly has created lots of apparent anomalies that are in fact data quality corrections.
- The >52 week penalties are contained in the standard NHS contract template – you will find that some CCGs have chosen not to include them in the final versions used for their providers. I think this may happen in situations where the provider is on a block contract. This is probably not a major factor though.
- My experience suggests that providers will not stop validating 18 week breaches against the clock stop targets – I am not sure any board or exec would simply not be worried about breaches that aren’t really breaches, financial penalty or not. It is still a core operational standard (as defined by the NTDA) so will still create a fuss if not achieved.
- as regards the missing very short waiters, grading for urgency by clinicians has definitely been known to take longer than 2 weeks. A less than one per cent discrepancy could easily be explained by late grading and, probably more commonly, by hospitals without a single point of referral receipt not getting things on the system in a timely fashion, e.g. letters going directly to med secs who sit on them for too long. If you know the patient won’t be seen for >10 weeks, why bother getting them on the system – this is the attitude in some cases at least!
Official statistics aren’t perfect, and that goes for the waiting list too. Sometimes Trusts discover waiting lists that they should have been reporting, but weren’t. Sometimes they find problems with their data, take a ‘reporting break’ for a while, and then resume on a different basis. And data can also be discontinuous when Trusts are abolished and created, or when services shut down or move.
So stuff happens, and it all affects the reported number of patients on the waiting list. The question is: when you add up all these changes, could they explain the apparent growth in the English waiting list? Funnily enough it turns out that, yes, they could.
Here is the officially-reported number of patients on the English waiting list (count of incomplete pathways) since the 18-week target was achieved ‘properly’ in summer 2009. You may recognise this chart from my monthly reports on waiting times in England, and as you can see the red line is looking high for the time of year.
But if you trawl through all the detail at Trust-specialty level, and strip out any apparent step-changes in counting, the chart looks like this instead:
As if by magic, the increase has disappeared. It isn’t proof, but it’s enough to cast serious doubt on the apparent increase, and I think we can all be more relaxed about it. After adjustment, the size of the waiting list looks pretty stable year after year, and any increases and decreases are lost in the noise without any discernible trend.
You may be feeling sceptical at this point, which is perfectly reasonable. So now I’ll explain exactly how I adjusted the official figures to produce the second chart, and you can make your own mind up about the conclusions.
Fans of statistical process control may be thinking of 3-sigma variations or CUSUM charting at this point, but the problem with those methods is that they all rely on deviations from an intended or mean central value. But the size of a waiting list does not have a central value, so we need to use a different approach. Instead I applied two rules to detect steps that may be caused by counting changes; either:
1) the reported list size falls to zero, or rises from zero, which should detect new or closed services and ‘reporting holidays’; or
2) the average of the next 4 months differs from the average of the previous 4 months by more than 2 standard deviations (where standard deviation is measured month by month over the whole time series), which should detect ‘newly-discovered’ waiting lists and major validation exercises.
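As a sketch (in Python, and not necessarily how I actually implemented it), the two rules might look like this, with `window` and `threshold` standing for the 4-month and 2-sigma parameters above:

```python
import statistics

def detect_steps(series, window=4, threshold=2.0):
    """Flag months where a counting change may have occurred.

    series -- monthly list sizes for one Trust-specialty.
    Rule 1: the list falls to zero, or rises from zero.
    Rule 2: the mean of the next `window` months differs from the mean of the
            previous `window` months by more than `threshold` standard
            deviations (sigma measured over the whole series).
    Returns the indices of the first month after each detected step.
    """
    sigma = statistics.pstdev(series)
    flags = []
    for i in range(1, len(series)):
        # Rule 1: transitions to or from zero (closures, openings,
        # 'reporting holidays')
        if (series[i] == 0) != (series[i - 1] == 0):
            flags.append(i)
            continue
        # Rule 2: needs a full window on either side of the candidate step
        if i >= window and i + window <= len(series):
            before = statistics.mean(series[i - window:i])
            after = statistics.mean(series[i:i + window])
            if sigma > 0 and abs(after - before) > threshold * sigma:
                flags.append(i)
    return flags

# A list that jumps from ~100 to ~1,100 is flagged at the jump;
# a list that reports zero for two months is flagged going down and up.
print(detect_steps([100] * 8 + [1100] * 4))  # [8]
print(detect_steps([50, 0, 0, 40]))          # [1, 3]
```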
The two tests were applied month by month to list size data from August 2009 to January 2013, at Trust-specialty level, which is the most granular data publicly available and therefore gives the best chance of detecting service-level changes. Steps in the data were detected in 2.4 per cent of months, which is equivalent to a step-change every 3.5 years at Trust-specialty level.
The data trawl was based on the current list of Trusts, so further adjustments were made for Trusts that existed in the March 2012 data but not the following month (principally pre-merger Barts). No Trusts disappeared from the data series in the month following March 2011 or March 2010.
If you have ever tried to detect anomalous deviations in time series data, you will know how frustrating it is. Sometimes your eye tells you there is a screaming change in the data, but your formula doesn’t pick it up. Other times your formula picks up a deviation that your eye tells you is just noise. The eye is very good at pattern-recognition, but it is also subjective, easily-led, and gets tired. So with 2,622 Trust-specialties to trawl, it’s better to let the computer do the work and hope the errors come out in the wash.
Let’s take a look at some examples of steps detected by the two rules. In each chart, the blue line is the list size (count of incomplete pathways) for one specialty in one Trust, and the yellow column indicates where a step up or down has been detected by the rules.
Here is a new Trust coming into existence:
Here the size of waiting list steps up, perhaps after the Trust discovered an unrecorded waiting list:
In this one, a Trust discovered a problem with its waiting list data, took a ‘reporting holiday’, and resumed reporting with corrected data:
I mentioned that sometimes the eyeball and the computer disagree with each other, and here are a couple of examples. Firstly, here is an example where the computer detected a step but the eyeball says it’s just noise:
And here is some data where the eyeball says this is a service that is being progressively shut down. The algorithm, however, doesn’t detect the early stages of the closure because the standard deviation is so high that the steps don’t exceed the two-sigma threshold, and only the final closure down to zero is detected.
To end the examples on a positive, here is some noisy data where no steps are detected by either the computer or the eyeball.
Whenever a step is detected, the later data is assumed to be correct, and all months prior to the step are adjusted by the size of the step. For instance, if the waiting list steps up by 1,000 patients in June 2011, then all months prior to June 2011 are adjusted by adding 1,000 patients.
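A minimal sketch of that back-adjustment (assuming, as a simplification, that the step size is measured as the month-on-month jump at the detected step):

```python
def adjust_for_steps(series, step_months):
    """Treat data after each detected step as correct, and shift all earlier
    months by the size of the step.

    step_months -- indices of the first month after each detected step.
    """
    adjusted = list(series)
    for i in step_months:
        delta = adjusted[i] - adjusted[i - 1]  # size of this step
        for j in range(i):
            adjusted[j] += delta  # bring earlier months onto the same basis
    return adjusted

# A list that steps up by 1,000 in its third month: all earlier months are
# adjusted upwards by 1,000.
print(adjust_for_steps([5_000, 5_000, 6_000, 6_000], [2]))
```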
The total size of the adjustments across all Trusts and specialties is:
The adjustments made are shown by the green line and, as we saw, they are enough to put the waiting list on the same path as in previous years. Given that the total list size is a decent leading indicator of long-wait pressures feeding through, that would indicate that (at least so far) pressure is not building on the waiting list itself.
The constant caveat, of course, is that the list size does not tell the whole story because referral restrictions may be holding up patients before they get that far.
UPDATE: This methodology is now incorporated into my regular monthly analysis of the English waiting list, with a couple of differences. Firstly, independent sector providers will be included. Secondly, hospitals admitting fewer than 50 patients in the most recent month will be excluded. The overall conclusions remain the same despite the changes.
Gooroo Planner has always been good at bulk analysis: load up dozens or even hundreds of service lines, and it will rip through them in a matter of seconds and do all your planning for you. That means you can generate scenarios rapidly, across your whole hospital or health economy, covering all the different activity or performance scenarios you want to investigate.
But sometimes you want to dive into one particular service – usually a big one like medical emergencies or orthopaedic electives – and tinker. What if this was our waiting time target? What if our length of stay was that? If we did some extra outpatients, what would be the knock-on effect for inpatients? In a situation like this you want to be able to fiddle around with the numbers, try this change and that change, and just see what happens. Not so easy when the calculations are all run in batches, but dead easy with the new report Editing screen.
When you’re in the main Report view, there is now a new link at the top called “Editing”. Click it, and now you can tinker and mess with the data, a service at a time, and see instantly “what would happen if…”. Like the week-by-week profiling screen, it’s designed to be easy to use by operational managers and anybody else whose job isn’t necessarily heavy on spreadsheets and databases. Information and planning analysts can load up the main data, and perhaps run the main report too, and then let operational managers seek out that sweet spot between waiting times targets and available resources.
So how do you use Editing? From the main reports view, click the Editing link above the table. The first step is to select the service you want to edit, using the control that looks like this:
In this example, the services in your model have been described using three headers: hospital site, specialty and admission type; the models you use in real life may have fewer or more headers than this. Say the service you want is orthopaedic elective inpatients on the main hospital site. Use the left-hand drop-down to select the main site; click the Specialty button above it to switch the drop-down to specialties and select orthopaedics; then click the Admission type button and select elective inpatients. When you’ve selected a unique service, the “Run Editing” button appears, so click that.
If this is your first time using Editing, you’ll need to choose the dataset fields (i.e. the data going in) and results fields (i.e. the results coming out) that you want to look at, using these two drop-downs:
You can select as many fields as you like from each list by ticking them. For instance your dataset drop-down might look something like this when you’re selecting the data you want to change:
When you’ve finished selecting dataset and results fields, click the “refresh” button (with the green arrows on it) to update the display. Then, depending on which items you’ve selected, you might see something that looks a bit like this:
Now you’re ready to start tinkering with the numbers. You can enter new values in the “New” boxes in the left-hand (Data) section and, when you click the “Calculate” button above the Results section, the new results values will appear under Results. You can also click any line in the Results section to see a waterfall chart showing the effect of successive changes on the result you chose.
It’s worth remembering that everything is calculated using this report’s existing Calculation Settings, including the activity scenario that determines whether future activity is going to carry on at the past rate, or match demand, or achieve targets, or achieve some other objective. So for instance, here we have changed the target waiting time from 9 weeks to 8 weeks, in a report where the activity scenario was set to achieve the waiting list targets; we’ve chosen the list size as the value to display on the waterfall chart:
You can see that changing the waiting time target has reduced the list size required to achieve that target (the red column shows that the list size has reduced). But what (you may be wondering) is the blank space on the chart labelled “Meeting of 20 Nov”? That is there to show what happens when you want to save a bundle of changes that you’ve made.

If you click the “Save” button then the changes you have made are written back to the Report, and you can create an audit log describing the change. This audit log helps you keep track of successive changes if, for instance, you are tracking the negotiation between a commissioner and provider, and whatever you enter as the title of the audit log appears on the axis of the waterfall chart. You can revisit previous changes using the “Select a previous change” drop-down (see the top of the previous image); it is clear from this waterfall chart that, whatever was decided on 20 Nov, it did not affect the list size.
If you don’t want to save the changes you’ve made then just change the service selected, or use your web browser to navigate away from this screen. Changes are only written back to the report if you click Save (and the original dataset is never changed at all by the Editing screen). Similarly, your selection of displayed dataset and results fields is only saved when you click Save, so if you change your selection temporarily and don’t want it preserved for your next visit then just don’t click Save.
That’s pretty much it. One more thing… if you are editing a report that has knock-ons switched on (so that, for instance, increases in outpatient activity are reflected in increased demand for elective services) then changes to one service may affect the results for another service. Those knock-on effects won’t be immediately visible on the Editing screen, but a warning that this may happen will appear in the audit log, and you can always see the knock-on effects either back in the main reports view or by editing the affected service.
Of all the referral-to-treatment (RTT) waiting times targets, the toughest is currently the “90 per cent” target. This requires 90 per cent of patients to have waited less than 18 weeks as they are admitted, on an adjusted basis. Adjusted, that is, for clock pauses.
I must confess, I had always assumed that clock pauses have only a minor effect. There might be one or two Trusts, I thought, where clock pauses were (shall we say) giving the adjusted admitted target a fair wind. So I was really quite taken aback when I looked at the evidence.
Clock pauses are only allowed in limited and defined circumstances. According to the RTT Rules Suite (p.22, my emphasis):
Clocks may only be paused for patient initiated delays at the admission for treatment stage of the waiting time pathway.
Once a decision to admit has been made, patients should, of course, be offered the earliest available dates to come in, as appropriate. However, where patients decline these offers, then, for a clock to be paused, they must be offered at least 2 reasonable dates for admission. Reasonable is defined as an offer of an appointment with at least 3 weeks notice.
Not much scope, you might think, for widespread pausing, or for provider-initiated pausing to help achieve the target. So how much are clock pauses actually used, and what effect do they have on adjusted admitted waiting times?
In the following chart, each point represents one specialty at one Trust, and it shows all Trust-specialties where at least 50 patients were admitted during June 2012. The position along the x-axis shows the 90th centile adjusted admitted RTT waiting time; i.e. the waiting time exceeded by only 10 per cent of patients, measured from referral to admission with clock pauses deducted. The position up the y-axis shows how much time was deducted for clock pauses, compared with the 90th centile unadjusted admitted RTT waiting time.
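In code, the two axes correspond to something like the following sketch. The function names are mine, and the nearest-rank centile convention used here is one of several, so it may not match the official calculation exactly:

```python
def centile(values, p=0.9):
    """The p-th centile by the simple nearest-rank method."""
    s = sorted(values)
    k = max(0, round(p * len(s)) - 1)
    return s[k]

def pause_effect(unadjusted_weeks, pause_weeks, p=0.9):
    """Return (adjusted 90th centile, weeks deducted at the 90th centile).

    unadjusted_weeks -- referral-to-admission waits, one per pathway
    pause_weeks      -- clock pauses deducted from each pathway
    """
    adjusted = [u - q for u, q in zip(unadjusted_weeks, pause_weeks)]
    a = centile(adjusted, p)
    u = centile(unadjusted_weeks, p)
    return a, u - a

# Ten illustrative pathways where the two longest waits carry 3-week pauses:
# the pauses pull the adjusted 90th centile from 19 weeks down to 16.
waits = [2, 4, 6, 8, 10, 12, 14, 16, 19, 20]
pauses = [0, 0, 0, 0, 0, 0, 0, 0, 3, 3]
print(pause_effect(waits, pauses))  # (16, 3)
```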
Do you think that an alien, looking at this chart, might be able to guess what the adjusted admitted target is?
You have to admire the accuracy with which so many services are achieving 18 weeks, with exactly the right amount of clock pausing.
It is also striking how much more common clock pauses are, in those services that are only just achieving the 18 week target. For services that lie between 17 and 18 weeks, some 42 per cent include at least one week of clock pauses; for the rest, the figure is just 24 per cent. Looking at it another way, the 17-18 weekers include an average 1.5 weeks of clock pauses, and the rest just 0.7 weeks.
Let’s drill down into one specialty in one Trust where the impact of clock pauses is especially clear. In the chart below, the unadjusted admissions are shown by the solid red columns, and the adjusted admissions by the solid red line (data from the Department of Health).
The gap between the line and the columns shows the net number of clock pauses: i.e. the number being paused minus the number coming off pause. There are no net pauses at all below 15 weeks, then 39 net pauses between 15 and 18 weeks, and then above 18 weeks they all start coming off pause again.
If this service had paused only 37 patients instead of 39, it would have failed the target. By a remarkable coincidence, it has achieved the target by a similarly narrow margin every single month for the last three years; the extent of clock pausing varies, but the adjusted result remains the same.
I am not making a blanket accusation that every service that narrowly achieves the adjusted admitted target, with just the right level of clock pauses, is misusing clock pauses in order to achieve it. But I think it is fairly clear that some of them probably are, and some systematically.
Does it matter? Yes, but not as much as it used to, because the recently-introduced incomplete pathways target does not allow clock pauses to be deducted. If that target ever achieves the primacy it deserves over the adjusted admitted target, then pauses will become largely irrelevant. Normal levels of patient-initiated pauses (which, as we saw in the first chart, do not have a big impact on waiting times) will be absorbed within 18 weeks and the 8 per cent tolerance on incomplete pathways.
Even as the targets stand today, any service with a very high level of clock pauses will still breach the incomplete pathways target (as the example above does). Unless, of course, a service decides to adjust the incomplete pathways for pauses too. That isn’t allowed, but it does happen; how else could you explain the chart below, in which long-waiting patients are apparently being admitted even though there are no long-waiting patients on the list (and weren’t the month before, either)?
(The Department of Health has just published the checks they run across all the monthly RTT data submitted by Trusts, including checks on clock pauses. You can download the document “RTT Assurance Data Checks (PDF, 54K)” here.)
The first Gooroo user group is being set up for the East Midlands and surrounding areas, where we have a growing cluster of NHS organisations using Gooroo’s planning and scheduling software.
Meetings will be held three times a year, and attendance is free of charge. The first will be on Monday 1st October from 2pm to 4:30pm in Teaching Room 5 of the Education Centre at Derby Hospital. If you’re a current or potential Gooroo user and would like to come along, then you are very welcome, and should email firstname.lastname@example.org to add your name to the mailing list.
The second user group is already being set up in Scotland, and again if you’d like to come then please email us. The first meeting will probably be in late October in Stirling.
If you are a Gooroo user somewhere else in the country, and would like a user group to be established in your area, then please let us know and we’ll see what we can do.
What is the “demand” for a waiting list service?
We could define demand as being the same as additions to the waiting list: then it would match referrals if we were looking at outpatients, or decisions to admit if we were looking at elective admitted patients. But would that be a useful definition? A lot of patients who are added to the waiting list never get treated, because instead they end up being removed from the waiting list for other reasons (for example, they change their mind, or are removed because they DNA). If we were to lay on enough activity to match additions to the list, then our waiting list would actually shrink because of the removals, which would be nice, but not necessarily what we intended if we merely wanted to keep up with demand. So defining demand as being the same as additions isn’t necessarily the most useful approach.
Instead, Gooroo Planner defines “demand” as the number of patients added to the waiting list who will eventually end up as activity. This means that demand and activity can be compared directly: recurring activity is whatever is needed to keep up with demand, and non-recurring activity is anything extra.
The next question, then, is how should we measure demand? Traditionally we have calculated historical demand as historical activity plus the growth in the waiting list, adjusted for removals. This was reasonable in the days when the size of the waiting list was scrutinised centrally and agonised over locally, when historical list size data was readily available, and when changes in list size usually reflected reality. In those days, additions data was not closely watched, and was usually less accurate than changes in the list size. However, none of this can be taken for granted now.
Nowadays, all stages of the patient pathway are linked up to track referral-to-treatment waiting times, and this has helpfully improved the accuracy of additions data. At the same time, investments in IT have sometimes meant that changes in list size reflect administrative actions, not an imbalance between activity and demand; new IT systems can lead to short-term errors in counting, and then one-off waiting list validation exercises can cause dramatic apparent cuts in the list size. There is another problem too: if waiting list snapshots are not regularly archived over time, it can be difficult to recreate this data afterwards. Counting the patients on the waiting list today is much easier than working out how many were on the list a year ago.
What is the solution? A good alternative method for calculating demand would ignore past changes in the size of the waiting list, and instead calculate demand as additions less removals. We know that additions are always balanced by activity, removals and the change in list size, so the maths must be fairly straightforward. Indeed it is, and if you want the details they are all laid out in the link below.
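The two methods can be sketched like this (hypothetical figures, and the function names are mine). Removals enter the first method via the identity rather than as an explicit term, so when the data are internally consistent both routes give the same answer:

```python
# Identity: additions = activity + removals + change_in_list_size.
# Rearranging gives two routes to demand; in practice you choose
# whichever inputs you trust more.

def demand_from_list_growth(activity, list_growth):
    """Method 1: activity plus growth in list size."""
    return activity + list_growth

def demand_from_additions(additions, removals):
    """Method 2: additions less removals."""
    return additions - removals

# Hypothetical month: 1,000 additions, 150 removals, 800 patients treated,
# so the list grew by 1,000 - 800 - 150 = 50 patients.
print(demand_from_list_growth(activity=800, list_growth=50))  # 850
print(demand_from_additions(additions=1_000, removals=150))   # 850
```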
Starting in a few weeks time, users of Gooroo Planner will have a choice between these two methods for calculating demand: either based on activity plus growth in list size (if list size data is more reliable), or on additions less removals (if additions data is more reliable).
If you want to make a permanent choice of method and use it always, then you will be able to set that up in your profile under Dataset Settings. Alternatively you can choose between the two methods on a case by case basis, and even use different methods for different services within the same model.
Before you ask, yes the change will be “backwards compatible” and your existing datasets will still work fine. That’s the beauty of Gooroo Planner’s flexible design: we can add the new without overturning the old.
Follow this link for the maths: Demand calculation method explained
Step forward Andy Bailey, Information & Contracts Manager at Sheffield Teaching Hospitals NHS Foundation Trust. With characteristic modesty, he comments: “Not sure it’s anything special, but better for users to create their own datasets rather than going through me.”
Well, it is certainly better if Gooroo users don’t have to bother an information analyst every time they want to do some planning. But as for it not being anything special, well I would disagree. So what is it?
As Andy explains to Gooroo users around the Trust: “it will allow you to generate the core dataset and make alterations as you see fit. The screen shot below provides you with the gist of how it works, you choose some dates, run the report and the table below will appear. Any row can be updated thereby allowing you to tweak LOS, theatre minutes, etc (if you’re unhappy with the proxies). When you press the ‘export data’ button, a csv file will be generated and you’ll be prompted to save it somewhere. It’s this file that you can then upload into the Gooroo system.”
In the screenshot below I have obscured the actual numbers for privacy:
In the old days (i.e. a few years ago) hospitals used to be places where 2,000 people put information into the computer system but only 5 people could get it back out again. Not any more. Increasingly, NHS Trusts and Commissioners have modern data warehouses with user-friendly interfaces, so that managers in all parts of the organisation can pull the data out themselves.
Andy’s interface is a great example of this. This apparently-simple dataset is enough to run multiple activity scenarios through Gooroo Planner, to work out waiting times, waiting lists, activity, beds, and theatres. By automating the information analyst’s role, Andy has saved himself a lot of work, and removed himself as a possible bottleneck when others want to do some planning.
You’ve thrown the data in, picked an activity scenario, and now you want to see the results.
More than that, you’ve loaded up the entire hospital – a couple of hundred service lines in all. So you’re slightly dreading the massive table – over ten thousand numbers – that will make up your detailed plan for the coming year.
You needn’t have worried, because Gooroo Planner’s brand-new Report viewer makes it all digestible.
Want to see the biggest waiting lists? Just click the column you want to sort by (or use the drop-down), and select your sort order.
Want to see Orthopaedics? Just type “ortho” into the filter box.
Want to subtotal across hospital sites and specialties to see the big picture? Just un-tick the headers you want to subtotal across.
Want to use all these features at the same time? No problem: just click Apply.
The new Report viewer means that chucking the numbers around is now a lot easier. So you can quickly pick out the detail that matters, without losing sight of the big picture. You get the power of a database, yet the controls are simpler than a spreadsheet.
To see this and everything else about Gooroo, just get in touch: email email@example.com or phone 01743 232149.
How did the HSJ (“NHS reports strong performance on 18 weeks targets”) and the Guardian (“Number of NHS patients waiting over 18 weeks for treatment up 27%”) manage to draw opposite conclusions from the same waiting times statistics?
The Guardian explained its numbers thus:
A total of 26,417 people in England waited more than 18 weeks to be treated in February this year compared to 20,662 in May 2010, when the government was formed – a 27% rise.
Looking at the data (spreadsheet here), we can see where those figures came from. 26,417 is the number of patients admitted as inpatients and daycases, during February 2012, who had waited over 18 weeks (adjusted for clock pauses) before being treated. 20,662 is the corresponding figure for May 2010, the month of the General Election.
That’s a 27.9 per cent increase. But an increase in what? Not in the “number of NHS patients waiting”: if you look at the waiting list figures (the so-called “incomplete pathways”), you find that the number of over-18-week waiters still on the waiting list fell by 16 per cent over the same period, from 209,411 to 175,549 (which, as it happens, is an all-time low).
No, the increase was in the number of over-18-week waiters being treated, and at this point we need to remind ourselves that treating long-waiting patients is a good thing (and certainly much better than leaving them on the waiting list). The NHS in England has recently been treating a lot more long-waiters in an effort to clear the over-18-week backlog: in the year to February 2012 some 337,264 over-18-week waiters were admitted (9.3 per cent of all admissions), compared with only 283,128 (7.8 per cent of admissions) in the previous 12 months.
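The arithmetic behind those two opposite-sounding headlines is easy to check in plain Python, using the figures quoted above:

```python
# Figures from the RTT statistics quoted in the text
treated_feb12, treated_may10 = 26417, 20662    # over-18-week waiters treated in the month
waiting_feb12, waiting_may10 = 175549, 209411  # over-18-week waiters still on the list

# The Guardian's number: rise in long-waiters TREATED
rise = (treated_feb12 - treated_may10) / treated_may10 * 100

# The other reading: fall in long-waiters still WAITING
fall = (waiting_may10 - waiting_feb12) / waiting_may10 * 100

print(round(rise, 1))  # 27.9 -> "up 27%" (patients treated after a long wait)
print(round(fall, 1))  # 16.2 -> "down 16%" (patients still waiting a long time)
```

Same spreadsheet, two different denominators: one measures the backlog being cleared, the other measures the backlog that remains.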
So the Guardian headline needs a bit of adjusting. Using the same figures it could have said “Number of NHS patients treated after waiting over 18 weeks up 27%”. Or, to put the focus on “NHS patients waiting”, it might have read “Number of NHS patients waiting over 18 weeks for treatment down 16% to record low”. Either way, it’s hardly “a huge embarrassment”.
A similar confusion over the figures popped up elsewhere this week, with the CQC’s large-scale survey of inpatients reporting that waiting times had gone up. Again, the figures show that the number of long-waiters picked up in the inpatients survey had increased, which again is a measure of long-waiters being treated not of long-waiters still waiting.
What is surprising about the inpatient survey is the very high proportion (14 per cent) reporting that they had waited longer than six months, when according to the national RTT statistics the figure for the same period (October 2011 to January 2012) was only 3 per cent. Perhaps the answer lies in the wording of the question: according to the summary report “the survey asked respondents how long they had to wait to be admitted to hospital, from the time they first talked to a health professional about being referred for a hospital admission”. This isn’t quite the same as the waiting time from referral to treatment, which may (or may not) explain the difference.
So what’s the verdict: “strong”, or an “embarrassment”? Looking at the waiting list, in February 2012 the numbers of patients waiting longer than 18, 26, 39 and 52 weeks were the lowest ever recorded. So were the 90th, 92nd and 95th centile waiting times. So (not that it means very much) were the mean and median waiting times. A “strong” performance? I hope we can all agree that it is.
Where are the longest waits? What are waiting times like in your local NHS? How difficult is the new waiting time target? Here are some maps to help you find the answers.
All the maps are interactive: you can zoom and scroll, click on the pins for details in a balloon, and click the title in the balloon for a full analysis.
The first pair of maps is intended for journalists and the public. It highlights the longest-waiters, and you can click on the pins for year-on-year comparisons of the total number waiting, 18 week waiters and 52 week waiters. All data is for all specialties combined (see below for specialty-level data).
The second pair of maps is designed more for NHS managers and clinicians. It looks at the challenge of achieving the new RTT waiting times target, and the pins show the waiting time achieved by 92 per cent of the waiting list (the new target for this measure is 18 weeks). Click on the pins to see estimates of how hard it will be to achieve the new target, both with and without improving patient scheduling. For more details about the methodology see our earlier blog post on the new target. All data is for all specialties combined, and the analysis therefore assumes that resources can be deployed flexibly between specialties.
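The "waiting time achieved by 92 per cent of the waiting list" is just the 92nd centile of current waits. A minimal nearest-rank sketch, with made-up waits (this is not the official RTT calculation, just an illustration of the measure):

```python
# Weeks waited so far by patients on a (hypothetical) waiting list
waits_weeks = [1, 2, 3, 4, 5, 6, 8, 10, 12, 15, 17, 19, 22, 30]

def centile(waits, pct):
    """Nearest-rank centile: the wait achieved by pct% of the list."""
    ordered = sorted(waits)
    k = -(-pct * len(ordered) // 100) - 1  # ceil(pct * n / 100) - 1
    return ordered[max(0, int(k))]

print(centile(waits_weeks, 92))  # 22 -> above the 18-week target in this example
```

In this illustration 92 per cent of the list has waited 22 weeks or less, so the service would be breaching the new 18-week incomplete-pathways target and would need either more activity or better scheduling.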
To drill down to specialty level, or to jump straight to a particular Trust or PCT, you will find a full set of detailed reports at the Gooroo website.
Full analysis by Trust/PCT and by specialty: All 18 week reports at specialty level