Posts Tagged ‘pathway’
The latest RTT waiting times stats for England came out last week, so what do they tell us about local waiting times at Trusts and PCTs?
If you want analysis for a particular Trust or PCT, by specialty, then you can look them up here. Or for an all-specialties view you can drill down through these interactive maps; just click a pin to get year-on-year comparisons of the total waiting list, over-18 week waits, and over-one-year waits. The pin colours show the number of over-one-year waiters.
The toughest backlogs
The maps show where the backlogs are, but how difficult will they be to clear? To answer this, we need a more detailed analysis, and we also have to make some assumptions because not all the data we need is published.
We’re working at all-specialties level, so we are assuming that resources can be allocated in ways that even out the pressures. We’re working with RTT waiting times, so we are assuming that all stages of the pathway can be optimally managed. We’ve made estimates around addition rates, urgency rates, removal and cancellation rates, and patient booking tactics. Finally, we’ve assumed that waiting lists are being accurately reported to the Department of Health. That’s quite a lot of assumptions, so the results are illustrative not definitive. Nevertheless they are interesting.
Let’s look at how difficult it will be for Trusts and PCTs to achieve the new, improved target that 92 per cent of the waiting list (incomplete pathways) must be within 18 weeks.
If Trusts and PCTs address their backlogs by treating them in a “waiting list initiative” (aka “chopping the tail off the waiting list”), then they are almost certainly going to be wasting money. Our analysis suggests that 90 per cent of Trusts and PCTs should be able to achieve the new target without reducing the size of their waiting list. Many of the rest have only a trivial backlog to clear: less than two days’ work. Also, many of the Trusts with apparent pressures have suspected or known data reporting problems linked to the installation of new IT systems, which means that much of the apparent backlog will eventually evaporate through waiting list validation.
With all those caveats, then, here are the PCTs with the biggest backlog-clearance challenges ahead of them. The figures show the number of working days’ activity needed to clear the backlog, even after good waiting list management is in place:
- Wirral PCT: 33 days
- Somerset PCT: 18 days
- Bath and North East Somerset PCT: 10 days
- Blackpool PCT: 6 days
- Oxfordshire PCT: 6 days
- Croydon PCT: 4 days
- Warwickshire PCT: 4 days
- Great Yarmouth and Waveney PCT: 2 days
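To show roughly what a “days to clear” figure means, here is a minimal sketch. It is not the published methodology, and the function name and every number in it are invented for illustration: the idea is simply that a backlog takes as many working days to clear as there is spare daily capacity left over after keeping pace with new additions.

```python
# Illustrative sketch only (not the actual analysis): a backlog expressed
# as working days of activity is the backlog size divided by the daily
# activity left over after keeping pace with new additions to the list.

def days_to_clear(backlog, daily_activity, daily_additions):
    """Working days needed to clear a backlog, given spare daily capacity."""
    spare = daily_activity - daily_additions
    if spare <= 0:
        raise ValueError("no spare capacity: the backlog cannot be cleared")
    return backlog / spare

# Hypothetical figures: 660 patients over target, 40 treated per working
# day, 20 new over-target patients arriving per day.
print(round(days_to_clear(660, 40, 20)))  # 33 working days
```

The same arithmetic also shows why a one-off “waiting list initiative” can waste money: if good list management alone raises the daily activity or cuts the additions, the spare capacity appears without any extra spend.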
The list of highly-pressured Trusts, unsurprisingly, shows some overlap:
- Wirral University Teaching Hospital NHS Foundation Trust: 39 days
- Taunton and Somerset NHS Foundation Trust: 24 days
- Mid Staffordshire NHS Foundation Trust: 22 days
- South Warwickshire NHS Foundation Trust: 21 days
- Royal United Hospital Bath NHS Trust: 20 days
- The Robert Jones And Agnes Hunt Orthopaedic Hospital NHS Foundation Trust: 13 days
- Weston Area Health NHS Trust: 11 days
- Winchester and Eastleigh Healthcare NHS Trust: 9 days
- Oxford Radcliffe Hospitals NHS Trust: 8 days
- Yeovil District Hospital NHS Foundation Trust: 8 days
- Blackpool Teaching Hospitals NHS Foundation Trust: 7 days
- Imperial College Healthcare NHS Trust: 5 days
- Queen Victoria Hospital NHS Foundation Trust: 4 days
- Croydon Health Services NHS Trust: 3 days
- Bradford Teaching Hospitals NHS Foundation Trust: 2 days
- Tameside Hospital NHS Foundation Trust: 2 days
- James Paget University Hospitals NHS Foundation Trust: 2 days
Where time stands still
I’d like to pick out one of these Trusts because there is something strange about its waiting list. The wonderfully-named “The Robert Jones And Agnes Hunt Orthopaedic Hospital NHS Foundation Trust” (usually shortened to the RJAH) is a lovely specialist orthopaedic hospital in the Welsh borders, set in beautiful hilly countryside. Their waiting list has a hill in it too, quite a big one, and it looks like this (see the dotted red line; data from the Department of Health):
That’s quite a peak. Luckily for the Trust, it lay just below 18 weeks in November, so they just managed to achieve their admitted patient target (90% within 17.8 weeks) and non-admitted patient target (95% within 18.0 weeks) during November. But how are they going to cope in December, when the peak has moved on and will be hitting 22 weeks?
Except that this peak isn’t going to move on. Curiously, it always stands still. Here is the previous month’s peak, in October:
Like the Welsh hills around the hospital, this peak stays where it is. It has remained in exactly the same place ever since it first appeared from nowhere in October 2009. Actually, not quite from nowhere, because one can hardly help noticing that the Trust’s over-one-year waiters disappeared at exactly the same time. Here is the moment the peak appeared:
How are we to explain this phenomenon?
It can’t be clock pauses, because incomplete pathways data is not supposed to be adjusted for pauses. I wrote to the Trust a month ago to offer them an opportunity to provide an explanation, but they have not responded. So as things stand, I am struggling to think of an innocent explanation, and if anybody can come up with one then I’d like to hear it.
Lucky, lucky NHS: eight referral-to-treatment waiting time targets when just one would do a better job. All over England, Trusts are complaining about the irrelevance and perversity of the target regime, but their pleas are (mainly) falling on deaf ears. Higher up the system, performance managers want green boxes, only green boxes on the RAG (red-amber-green) ratings. You shall achieve the targets, they insist: all eight of them.
Here, then, is a little helping hand with that negotiation. You can offer them their green boxes, with pleasure. But it’s going to cost them, and we’re going to show how you can work out the bill.
1) 95th centile RTT waiting time for incomplete pathways (target: 28 weeks)
Let’s start with the only target that is actually sensible: the 95th centile referral-to-treatment (RTT) waiting time for incomplete pathways (or, in plainer English, the waiting time that the top 5 per cent of the waiting list has exceeded). It’s sensible because it delivers the third of the four key principles of good waiting list management, and (with good planning and monitoring) is relatively straightforward to implement without undermining the others.
The four principles are:
- treat patients with higher clinical priority first
- treat patients with similar clinical priority in turn
- treat the least-urgent patients within a reasonable time
- don’t waste capacity
But how do we actually work out the activity needed to achieve this particular target? It isn’t easy, and it took many years of research to find a good solution to this problem. The first step in the calculation is the hardest: working out the size of waiting list that is consistent with the waiting times target. When that’s done, the remaining steps aren’t too bad. For the sake of this post, we’ll assume you have a well-researched model that does all this for you.
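As a rough illustration of the measurement behind this target (the measurement only, not the capacity model itself), here is how a 95th centile waiting time can be read off a snapshot of the incomplete-pathway list using the nearest-rank method. The function name and the list of waits are invented for illustration:

```python
# Minimal sketch: the 95th centile waiting time is the wait that the top
# 5 per cent of the waiting list has exceeded. Waits below are invented.

def percentile_wait(waits_weeks, centile):
    """Waiting time (weeks) at the given centile, nearest-rank method."""
    ordered = sorted(waits_weeks)
    rank = max(1, int(round(centile / 100 * len(ordered))))
    return ordered[rank - 1]

waits = [2, 3, 5, 6, 8, 9, 11, 12, 14, 15,
         16, 18, 20, 22, 24, 25, 26, 27, 29, 31]
print(percentile_wait(waits, 95))  # 29 weeks: over the 28-week target
```

In practice the hard part is the reverse calculation: finding the list size that keeps this centile inside the target sustainably, which is what the planning model does for you.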
Once you have a suitable model, the calculation is easy: you just specify the target and let the model take care of everything. In Gooroo Planner, for instance, you can either load up targets for every service separately, or not bother and just set up default values like this:
To give an indication of how tough each of the targets is, we will use a benchmark waiting list that is well-managed according to the four principles above, and show how it has to get smaller and smaller as each new target is applied (keeping all other attributes of this benchmark list constant: addition rate, cancellations, urgency, etc).
To achieve 95 per cent of incomplete pathways within 28 weeks, sustainably and safely, without taking any of the other targets into account at this stage, our benchmark list starts out with 200 patients on it. We need two copies of this benchmark list now, one for admitted and one for non-admitted pathways, and we will track the fates of those two benchmark lists below.
2) 95th centile RTT waiting time for admitted patients (target: 23 weeks)
The next six targets we are going to look at are all based on those patients who were lucky enough to be treated or discharged over the chosen time period, as opposed to those patients who are still waiting. The trouble with these targets is that Trusts can achieve them by being selective about which patients they choose to treat.
For instance, any Trust could achieve 95 per cent of admissions within 23 weeks, cost-free, simply by picking 19 short-waiting patients for admission before picking an over-23-week waiter. This would violate the second principle: that patients with similar clinical priority should be treated in turn.
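The arithmetic of that perverse incentive is worth making explicit. In this invented example, admitting nineteen short-waiters for every one over-23-week waiter puts exactly 95 per cent of admissions inside 23 weeks, regardless of what is happening on the waiting list itself:

```python
# Sketch of the perverse incentive: pick 19 short-waiters for every one
# long-waiter and the admitted-patient target is met on paper, however
# long the list's tail grows. All figures are invented for illustration.

admissions = [4] * 19 + [30]     # 19 four-week waits, one 30-week wait
within = sum(1 for w in admissions if w <= 23)
print(within / len(admissions))  # 0.95 - target achieved, cost-free
```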
But on the assumption that you want to do the job properly, by actually achieving short waits on the waiting list as well as in your admissions profile, this target is easy to model in Gooroo Planner. Just set the data up like this:
It turns out that this target is more challenging to achieve sustainably than the incomplete pathways target above, and our benchmark waiting list (applied now to admitted patient pathways) must have no more than 155 patients on it.
That means we can now ignore the incomplete pathways target above because, if we achieve this admitted patient target while following the principles of good waiting list management, then we will have a small enough waiting list to automatically achieve the incomplete pathways target too.
3) Percentage admitted within 18 weeks RTT, adjusted basis (target: 90 per cent)
This is the best-known of all the RTT waiting time targets, though it too suffers from the problem that it is easy to achieve if you abandon patients who have already exceeded 18 weeks.
If you want to achieve it safely and sustainably, it is similarly easy to model:
Now our benchmark admitted-pathway list must not exceed 132 patients, if well-managed, and we can forget about the previous target too as it will automatically be met if we achieve this one.
4) 95th centile RTT waiting time for non-admitted patients (target: 18.3 weeks)
This target has all the same perverse incentives as the admitted patient targets above. If, again, we assume that we will do the job properly and manage the waiting list well, it is easy to model safely and sustainably:
If our benchmark list is now a non-admitted pathway, it must not exceed 129 patients.
5) Percentage non-admitted within 18 weeks RTT (target: 95 per cent)
This target duplicates the target above, but with a slightly tougher limit of 95 per cent within 18 weeks instead of 18.3 weeks. Originally this target (together with the percentage admitted within 18 weeks) was going to be dropped, but they had to be reinstated as they are both laid down in law.
You can model this target easily, just like the previous target but with 18 weeks instead of 18.3. Our benchmark non-admitted list now must shrink a little further to 127 patients.
6) Median RTT waiting time for admitted patients (target: 11.1 weeks)
The median targets make things a little more complicated to model, and to understand. But we are looking for ways to have a sensible discussion about the costs of achieving green boxes right across our eight RTT targets, so let’s dive in and find a way to do it.
What is meant by this median target? If we look at the waiting times experienced by patients admitted over a period of time, the median admitted waiting time is the waiting time that half of them exceeded. If we were managing our waiting list well, according to the four principles, what would the median be then?
In all the main surgical specialties, only a minority of patients are clinically urgent. The remaining majority, who are non-urgent, should be admitted in turn and therefore all of them should experience roughly the same waiting time. The median patient and the 95th centile patient are both among this majority, and should therefore experience similar waiting times; so it follows that the median waiting time should be close to the 95th centile waiting time if we are managing our waiting list well.
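That reasoning can be sketched numerically. In the invented waiting list below, a small urgent minority is treated quickly and the routine majority all wait about the same time, so the median and the 95th centile end up close together, just as the argument predicts:

```python
# Sketch of a well-managed list: urgent minority treated fast, routine
# majority treated in turn with similar waits. All waits are invented.
import statistics

urgent = [1, 2, 2, 3]                  # clinically urgent minority
routine = [20, 21, 21, 22, 22, 22, 23, 23,
           23, 24, 24, 24, 25, 25, 26, 26]
waits = sorted(urgent + routine)

median = statistics.median(waits)
p95 = waits[int(0.95 * len(waits)) - 1]  # nearest-rank 95th centile
print(median, p95)  # 22.5 and 26: close together, unlike 11.1 vs 23
```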
But, for admitted patients, the targets are asking for a 95th centile of 23 weeks, and a median of only 11.1 weeks. How can we achieve that? Quite easily, as it turns out, although it does require us to violate the principles of good waiting list management. All we need to do is pick a lot of non-urgent (i.e. routine) patients and expedite them, for no other reason than to meet the target. Yes, that is brutally unfair on the other routine patients, who will wait longer as a result, and we can put that argument to the people who enforce the targets. But if they want all their boxes to be green, that is what they are going to get.
To model this target, we can pretend that half our patients are urgent and need admission within 11.1 weeks. They aren’t, but that is what the target demands. We just leave our long-wait target at the most demanding level we discovered above (for admitted patients, 90 per cent of admissions within 18 weeks). So that means we set our model up like this (we’ll show the data in data entry style this time):
| Data code | Data description | Value |
| --- | --- | --- |
| FutPCWaiting1 | Future percent waiting at time 1 | 50% |
| FutWaitTime1 | Future waiting time for time-limited patients 1 | 11.1 |
| TgtMaxWait | Future target waiting time | 18 |
| TgtMaxWaitPC | Future percentage within future target waiting time | 90% |
| TgtMaxWaitType | Flag whether target max wait is flux or snapshot based | f |
With 50 per cent of admissions within 11.1 weeks, and 90 per cent of admissions within 18 weeks, our benchmark waiting list for an admitted pathway must not exceed 112 patients. That is 15 per cent smaller than before we introduced the median waiting time target, and that is the extra financial cost of the median target.
(To model this more precisely, it would be better to specify two levels of urgency, with the first one being the true clinical urgency of the service; if urgency rates are significant then the waiting list will need to be even smaller than this.)
7) Median RTT waiting time for non-admitted patients (target: 6.6 weeks)
Exactly the same process applies to the median for non-admitted patients, except that now 50 per cent are non-admitted within 6.6 weeks, and our target is specified as 95% within 18 weeks. Now our benchmark waiting list for non-admission must not exceed 97 patients, which is 24 per cent smaller than the well-managed non-admitted list when no median target was applied.
8) Median RTT waiting time for incomplete pathways (target: 7.2 weeks)
This last target is the trickiest of all. To be honest, we have not worked out a way of incorporating it directly into the model. Nor can we think of any purpose to this target that is not already achieved much better by the 95th centile for incomplete pathways.
However we have done some side calculations to work out whether, in a well-managed waiting list, this target would be more or less challenging than the “median + longwait” targets we have just considered. If it’s less challenging, that is good news because we know that, if we met the admitted and non-admitted targets above, then the median incomplete pathway would be met too. If it’s more challenging, then we need to work it out specially. So which is it?
Good news: it turns out to be less challenging, and that conclusion holds under all reasonable scenarios for surgical clinical priorities and for the management of expedited routine patients. That means we can neglect this target, knowing that in a well-managed list everything should turn out alright for our median incomplete pathways, so long as our waiting list is small enough for the other targets to be met.
So, in summary, this is how we should set up our planning models to achieve eight green boxes on our RAG ratings.
For admitted patient pathways, we should specify the level of clinical urgency in the casemix, and then add a second level of urgency so that only 50 per cent of patients remain on the list at 11.1 weeks. The waiting time target is 90 per cent within 18 weeks on a flux basis.
For non-admitted patient pathways, we specify the level of clinical urgency, and then our second level of urgency has 50 per cent of patients remaining at 6.6 weeks. The waiting time target is 95 per cent within 18 weeks on a flux basis (as opposed to a waiting list snapshot basis).
Given those inputs, the model will work out the activity, capacity and money needed to deliver the specified targets, provided we manage the waiting list well. If we achieve all that, then the other targets should simply fall into place, as they are all less demanding and would be achieved even with a larger waiting list.
In practice we would want to work out some other things too. Firstly, we might repeat the calculation without the median targets, just to show the extra costs that are pointlessly incurred in achieving a less-fair waiting list. Secondly, and particularly for the admitted patient pathway, we would want to model the pathway stages separately in order to work out capacity and money.
It’s been a long slog, but worth it. Now we know how to offer eight green boxes. Who knows, one day we might get a sensible target regime that means we don’t have to?
NICE launched eighteen pathways yesterday, covering everything from neonatal jaundice to dementia. If it’s your job to plan NHS capacity into the future, how should you respond when pathway changes are on their way?
The point of planning is to prepare for the future. You can’t predict everything that is going to happen, but you do your best, accounting for foreseeable changes like trend demand growth, efforts to cut waiting list backlogs, demographic drift, and foreseeable pathway changes. This is why your plans are better than assuming the status quo.
Pathway changes are often the most complicated, because they are usually not representative of the specialty and so all the performance averages (such as lengths of stay) have to be changed too. So you can’t simply deduct a quantum of demand and leave everything else the same; you need to do something a bit cleverer.
Don’t change the future – rewrite the past
The best way to model upcoming pathway changes is to rewrite the past, as if the new pathway had always been in effect. So if a particular HRG is going to be managed out-of-hospital in future, then you need to filter that HRG out of your past activity data before passing it to a query (or to Gooroo Planner) to extract the information you need about activity, length of stay, clinical urgency rates, seasonal demand profiles, and so on.
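A minimal sketch of that filtering step, with invented records and a hypothetical HRG code “B12Z” standing in for the cohort moving out of hospital (in real work this would be a query against your patient administration system):

```python
# Sketch of "rewriting the past": remove the pathway-change cohort from
# historical activity before deriving planning averages. Records and the
# HRG code "B12Z" are invented for illustration.

past_activity = [
    {"hrg": "B12Z", "los_days": 1},   # cohort moving out of hospital
    {"hrg": "C24X", "los_days": 3},
    {"hrg": "C24X", "los_days": 5},
    {"hrg": "B12Z", "los_days": 2},
]

# History as if the new pathway had always applied
rewritten = [r for r in past_activity if r["hrg"] != "B12Z"]

mean_los = sum(r["los_days"] for r in rewritten) / len(rewritten)
print(len(rewritten), mean_los)  # 2 remaining spells, mean stay 4.0 days
```

Notice that the mean length of stay changes (from 2.75 days to 4.0 days here) as well as the activity count: this is exactly why you cannot simply deduct a quantum of demand and leave the performance averages alone.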
If your pathway change has more complex effects then you may not be able to capture them with simple database queries that filter patients in or out based on things like age, postcode or HRG. If this complexity is significant and needs to be modelled explicitly, then a more specialist simulation model such as Scenario Generator can be used to model the pathway flows, get an indication of the effect on waiting times, and work out the implications for capacity.
If you need to look at waiting times more thoroughly, then Scenario Generator can be used to generate the rewritten past data that you need, on the new pathway basis, before passing it to Gooroo Planner for the detailed waiting time, capacity and financial calculations.
Be specific about how care will change
So when you’ve been slaving away at next year’s plans, and somebody pops up with a challenge about a pathway change, don’t mutter something about estimating the effect on demand trends. Instead, you can ask them to be specific about the characteristics of the patients affected and how their care will change. If they deliver the goods, you’ll know what to do: rewrite history, and then use that as the basis for your new plan.
The “Liberative” Government’s health reforms started life with a light and permissive vision of GP commissioning. But now they are mired in confusion. What happened? In short, the new vision collided with the old. Last week the Health Select Committee sided firmly with the old vision, calling for Consortia to be renamed as Commissioning Authorities with formal governance structures and stakeholder representation.
New vision or old, everybody wants commissioning to be done well. But what does commissioning mean, and how should it change?
In the conventional vision, commissioning starts with the carefully-assessed healthcare needs of your local population. Then you compare this against the services actually provided. Inevitably, you find plenty of areas where needs are not being met at all, or where provision could be improved, or where there is over-provision and ineffectiveness. Starting with the biggest mismatches, you work with other stakeholders to design new and better pathways, and then you seek providers to deliver them (or work with existing providers to improve things).
Conventionally, you manage “your” providers through the annual contracting process. You estimate the amount of activity to be done, and then apply the tariff price (if there is one) or negotiate a price (if there isn’t). You manage quality using Key Performance Indicators (KPIs). If quality falls short or activity is at variance with the contract volumes, then you apply the remedies specified in the contract.
So far, so familiar. But this is all office-based activity. What are the chances of it making a real difference to patients?
You hope to reach a position where need and provision roughly match. But your experience shows that anything you measure in healthcare displays huge and unexplained variations; if you do find a match between need and provision, it is only by chance. And if you achieve a match today, then it probably won’t match tomorrow. So trying to match need with provision is going to be highly inexact at best.
0.5% of the population consumed over 20% of acute spend
Patients also show great variety even within a single pathway, and the sickest patients usually have multiple conditions. The harder you try to tailor a pathway to a particular condition, the more you find there are exceptions to the rule. Do these exceptions matter? Yes, because they are your most expensive patients. Data from one PCT shows that a mere 0.5 per cent of the catchment population (about 1,000 people) accounted for over 20 per cent of acute expenditure. So good judgement by GPs trumps good pathway specification when it comes to handling the sheer variety of patients presenting.
What about quality? You hope that quality and performance can be managed with KPIs and contractual sanctions. But “quality” is too rich a concept to be described in even the most comprehensive list of KPIs. The harder you try to specify everything, the more you lock yourself into the status quo. Moreover, anything that isn’t in the KPIs is simply driven out: the effort of monitoring everything else in the contract takes over. So quality needs to be managed through dialogue, not specification, and the organised concerns of GPs are a better guide to quality than words in a contract.
Even activity – the crunchiest of numbers – is hard to control in the standard contract. You can try to limit elective activity if the waiting list isn’t rising. You can try to throttle cost by using activity caps and restrictions on “procedures of limited clinical effectiveness”. However, most contractual changes need to be implemented with the agreement of the provider (which may not be forthcoming), and in any case tactics such as banning procedures tend to be blunt and limited instruments that displace or defer the problem rather than solving it.
Finally, awarding contracts only to selected providers (especially if the contracts specify guaranteed volumes) involves saying “no” to other potential providers. The argument is that this helps to control expenditure, but again there is a lot of hoping going on: you hope that, by restricting the availability of providers, you will reduce demand. As Don Giovanni said in a different context:
Wer nur einer getreu ist,
Begeht ein Unrecht an den andern;
If I am faithful to one,
I am unfaithful to all the others;
So the old vision of commissioning falls short on a number of counts. How could a new vision improve on it?
In commissioning, as with everything else in healthcare, real life happens in the consulting room not in the office. So better commissioning needs to happen in the consulting room too: if individual GPs manage their referrals and patient pathways well, then quality and budgets will follow. So the Consortium should focus its attention “downwards” to practices, rather than “upwards” to the Commissioning Board or “across” to providers.
That way, the life of a commissioner no longer revolves around the annual contracting round or the enforcement of KPIs. Instead, it revolves around helping GPs manage value, by:
- monitoring and escalating quality concerns raised by GPs;
- providing a “bank manager” function to GPs;
- peer-reviewing GP referral patterns and pooling risk;
- providing back-office, scheduling, and financial services to GPs;
- calling for new and better services, and helping prospective providers with their market research;
- ensuring that GPs are aware of the services and drugs available to them.
This moves decisively away from the adversarial contract-driven approach of the past. But one major step needs to be taken to make it work, a step that is not taken in the Health and Social Care Bill. Consortia need to be able to enforce budgetary limits at practice level, which is something that politicians (understandably) have tended to shy away from.
However, there is nothing to prevent GPs from opting to accept practice-level budgetary limits within their Consortium, or even formalising this rule in their Consortium’s constitution. After all, many GPs are pretty fed up with having their referrals interfered with, and their choice of providers restricted from on high, whenever PCTs are struggling to achieve their statutory duties because they cannot control demand.
So GPs and their Consortia are faced with a choice: genuine freedom to refer within a limited budget that they control; or a continuation of the imposed and inconsistent restrictions that face them now. What will they do? Perhaps the best outcome would be for different Consortia to make different choices. That would truly test the two visions of commissioning.
A clued-up 18-weeks manager put me on the spot recently. We manage patient bookings according to their position on the whole 18 week pathway, she said. How do you model that?
My first answer was the usual one: it’s best to model each stage of the pathway separately. That way you get systematic management and planning at each step, and the outpatient booking department isn’t tempted to pass on its waiting time problems for the inpatient department to solve later.
Ah, she said, but we’re a small Trust, and we just have one booking office for all stages of the pathway. What you say is fair enough if all patients follow the same pathway; but what if some have a diagnostic stage and some don’t? Then modelling each stage separately won’t work because, at the inpatient stage, the post-diagnostic patients are much closer to 18 weeks than the others.
Well, that was a tougher question, and I didn’t have an answer to hand. Multi-stage, multi-strand pathways would be tough to model properly (taking into account clinical priorities, cancellations, booking rules, etc) and I’m not aware of anyone having done it. But it’s a good question and it deserves an answer, and after thinking about it I think the answer is this.
The scenario we are talking about is:
Let’s start with the practicalities of managing patient bookings on this pathway. The outpatient stage is a genuine single-stage booking process, and is directly suitable for good booking techniques that achieve 100 per cent slot utilisation, shorter waits, protected clinical priorities, and minimised disruption.
Then at the diagnostic stage, patients can be added to the waiting list with their original referral date, and flagged if they suffered cancellation in outpatients. This ensures that those who have already waited longest are booked first, and that previously-cancelled patients receive preferential treatment (and have capacity set aside for them). Apart from that, the diagnostic stage can also be managed as a straightforward single-stage booking process.
The inpatient stage is more complex, because the major pathway split at the diagnostic stage means there are two quite distinct classes of routine patient, with quite different waiting time histories. Nevertheless, if the inpatient stage is managed using a partial booking system and patients are added to the waiting list with their original referral dates, then I think it can also be managed as a straightforward single-stage process.
Under a partial booking system, appointments are only issued a certain number of weeks ahead, so those patients who bypassed the diagnostic stage will wait a few weeks before being given their appointments, whereas patients who had a diagnostic will be given appointments soon after being added to the inpatient list. This restores evenness to the two halves of the pathway, and allows the 18 week target to be achieved across both parts of the pathway, with the largest possible total waiting list.
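The key booking rule here is simply that patients join the inpatient list carrying their original referral dates, and are booked in referral-date order. A minimal sketch, with invented patients and dates:

```python
# Sketch of the inpatient booking rule: patients carry their original
# referral date onto the inpatient list, so booking in referral-date
# order restores evenness between the diagnostic and non-diagnostic
# strands. All patients and dates are invented for illustration.
from datetime import date

inpatient_list = [
    {"patient": "A", "referred": date(2011, 1, 10), "via_diagnostic": True},
    {"patient": "B", "referred": date(2011, 2, 1),  "via_diagnostic": False},
    {"patient": "C", "referred": date(2011, 1, 3),  "via_diagnostic": True},
]

booking_order = sorted(inpatient_list, key=lambda p: p["referred"])
print([p["patient"] for p in booking_order])  # ['C', 'A', 'B']
```

Patient B, who bypassed the diagnostic stage, books behind the two post-diagnostic patients despite joining the inpatient list with a shorter inpatient wait, which is exactly the evenness the partial booking system restores.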
What about planning? When it comes to planning future activity to achieve the 18 week operating standard, the outpatient stage can be modelled as a single stage, as above. After that point, if you do want to model the split pathway, I think it makes sense to split it (for planning purposes only) all the way to the end, so that it looks like this:
So, for example, your planning might involve working out the activity, capacity and cost required to achieve 90 per cent treated within:
- 6 weeks, for outpatients
- 6 weeks, for diagnostics
- 6 weeks, for post-diagnostic inpatients
- 12 weeks, for non-diagnostic inpatients
That way, you are planning to achieve the overall 18-week target, but still taking advantage of the longer waits available on the non-diagnostic inpatient path.
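A quick sanity check on that plan: each strand’s stage targets must sum to no more than the 18-week referral-to-treatment limit. Using the example figures above:

```python
# Check that each strand of the split pathway fits inside 18 weeks RTT,
# using the illustrative stage targets from the plan above.

diagnostic_strand = [6, 6, 6]   # outpatients, diagnostics, inpatients
direct_strand = [6, 12]         # outpatients, inpatients (no diagnostic)

for strand in (diagnostic_strand, direct_strand):
    assert sum(strand) <= 18

print(sum(diagnostic_strand), sum(direct_strand))  # 18 18
```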
Incidentally, whilst it is fine to split the pathway like this for planning purposes, it is usually better to avoid splitting an operational booking system. The differences in waiting times between one consultant and another are bad enough, without adding any further splits.
Why is forward planning such a slog in the NHS? Fundamentally, all we are doing is this:
- Take what happened last year
- Add a bit
- Adjust for any specific pathway and demand management changes
- Apply some agreed performance assumptions using well-known equations
- Output the results as activity, capacity and money.
- Profile it all into a monthly plan.
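Those steps can be sketched as a pipeline. Everything in this example is an assumption for illustration (the growth rate, the pathway adjustment, the flat monthly profile, the cost per case); real plans would draw these from agreed planning parameters, service by service:

```python
# The planning steps above, sketched with invented numbers. Growth,
# pathway adjustments, cost per case and the flat monthly profile are
# all illustrative assumptions, not agreed planning parameters.

def plan_next_year(last_year_activity, growth=0.03, pathway_adjustment=0,
                   cost_per_case=1000):
    demand = last_year_activity * (1 + growth)         # take last year, add a bit
    demand += pathway_adjustment                       # pathway / demand management
    cost = demand * cost_per_case                      # activity -> money
    monthly = [round(demand / 12) for _ in range(12)]  # flat monthly profile
    return round(demand), round(cost), monthly

activity, cost, profile = plan_next_year(12000, growth=0.03,
                                         pathway_adjustment=-200)
print(activity, cost)  # 12160 12160000
```

The sketch fits in a dozen lines; as the rest of this post explains, the difficulty is not the arithmetic but doing it across hundreds of service lines with shifting data and assumptions.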
The first thing that makes it difficult is the sheer volume of numbers involved. Your plans need to break everything down at least by specialty (treatment function code), or by HRG chapter, or even by HRG. Then you need to separate out emergency spells, elective spells, A&E, first outpatients, etc. And you need to split it by commissioner or provider, and possibly by provider site as well. All in all, you are looking at dozens of service lines at least, and quite possibly hundreds.
The second problem is that different kinds of data come from different places in different formats (including notes of meetings and scraps of paper). Some of the performance assumptions are broad-brush, some are detailed, and some are exceptions to a general rule. They somehow need knitting together into a single planning model. And they keep changing: time goes by and more recent activity data becomes available; performance assumptions and pathways are negotiated and amended; new guidance comes down from the Department of Health (and in future the Commissioning Board).
The third problem is that some of the historical data is prone to errors: activity is not completely or correctly coded, there are delays in recording events on the system, there are duplicates and omissions, and changing customs and practices cause coding drift and other systematic error. To some extent, these errors can be detected and corrected automatically; in many cases they can’t.
The fourth problem is that well-known equations do not exist for some of the workings. Waiting time standards of the form “90% of patients must be treated within 8 weeks” have historically been a high-profile example; the standard is easy to state, but to model it properly you need to take in the effect of clinical urgency, cancellations, whether you are running a fully-booked or partially-booked system, and other factors. If you try to simplify the problem by assuming that current practice reflects how things ought to be, then you are ignoring (often substantial) opportunities to improve.
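To see why no simple equation will do, here is a toy discrete-time simulation in which urgent patients take treatment slots first and routine patients queue behind them. All the numbers (demand, capacity, urgency rate) are invented, and real booking systems are far messier, but it shows how urgency pushes out the routine 90th-percentile wait in a way a naive queue-divided-by-throughput formula misses:

```python
# Toy simulation: a weekly clinic where a fraction of arrivals are
# urgent and jump the queue. Urgent patients are treated in their
# arrival week; routine patients wait first-come-first-served for
# whatever slots remain. Illustrative numbers only.
import random

random.seed(1)

WEEKS, CAPACITY = 500, 105        # simulated weeks; treatment slots/week
ARRIVALS, URGENT_FRAC = 100, 0.2  # mean weekly demand; share urgent

routine_queue = []                # arrival week of each waiting routine patient
waits = []                        # completed routine waits, in weeks

for week in range(WEEKS):
    # ~Binomial(200, 0.5) weekly demand, mean ARRIVALS
    n = sum(random.random() < ARRIVALS / 200 for _ in range(200))
    urgent = sum(random.random() < URGENT_FRAC for _ in range(n))
    routine_queue.extend([week] * (n - urgent))
    slots = max(0, CAPACITY - urgent)     # urgent patients take slots first
    for _ in range(min(slots, len(routine_queue))):
        waits.append(week - routine_queue.pop(0))

waits.sort()
p90 = waits[int(0.9 * len(waits))]
print(f"90th percentile routine wait: {p90} weeks")
```

Add cancellations, DNAs and partial booking and the model only gets more tangled, which is the point: the standard is easy to state and hard to model.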
There are similar problems with monthly profiling: you can profile non-elective work based on historical patterns; but what agreed methods are there for profiling inpatient elective work around peaks in non-elective demand, when the 18-week waiting time limit means that you can’t slow down surgery very much over the winter?
The fifth problem is that you probably have the wrong tools for the job. The suggested tool for presenting your plans is usually a spreadsheet, and (despite the well-known problems with spreadsheet errors, and their limitations when it comes to iterative calculations) they are the cultural default.
How much does this matter? Aren’t these plans just shelfware? Feeding the beast, and all that?
Actually, no. Although your painstakingly-crafted plans may end up on the shelf afterwards, there are two good reasons why the effort is important:
- The planning process causes lots of conversations to happen that do change the way healthcare is delivered, and the numbers make sure those conversations are tough enough.
- The financial squeeze is now adding urgency: PCTs will not be allowed to create a legacy of debt for future GP consortia, continually-rising demand is no longer affordable, and hospitals have a capacity overhang from the boom years… and so back to point 1 above.
It is natural when planning to focus on the correctness of the calculations. The complexity of the process can make this all-consuming.
But it is equally important to make sure that everyone else involved can keep track of the performance and pathway assumptions being used. Why? Because when clinicians and managers make changes to healthcare in real life, they are implementing changes to these assumptions.
Of course the calculations must be right, and the “bottom line” results are crucial in showing how much further negotiation will be needed. But it is also worth paying attention to the presentation of those key assumptions. If other people can easily see what they are, what they mean, and how they change during negotiations, then better decisions will be made about them, and the planning process will be a more powerful force for improvement in the real world.
We talk about the demand for healthcare all the time, but sometimes the talk is loose. If you hang around NHS offices for long enough you might hear statements like:
- Demand can’t be that high – the contract doesn’t provide that much.
- Last year we did 1,000, add 3% growth in demand, so that makes 1,030 next year.
- I’ve got hips coming out of my ears.
…and so on.
This kind of talk confuses demand and activity. More accurately, we might say things like this:
- Demand is likely to grow, but we don’t know exactly why or by how much.
- The waiting list is the accumulated mismatch between demand and activity.
- If we want to control waiting lists, we have to at least keep up with demand.
- Historic demand is activity plus the growth in the waiting list (adjusted for removals).
This is the sort of thing that is built into good planning models, and it allows us to make other useful distinctions, like:
- recurring activity is the activity required to keep up with demand; and
- non-recurring activity is everything else, and it brings down the waiting list.
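Those definitions boil down to simple waiting-list arithmetic. Here is a sketch for one service over one period, with invented numbers:

```python
# The distinctions above as waiting-list bookkeeping for one service
# over one period. All figures are invented for illustration.

activity = 1030                    # patients treated from the list
list_start, list_end = 400, 450    # waiting list at start and end
removals = 20                      # left the list without treatment

# Historic demand = activity + growth in the list, adjusted for removals
demand = activity + (list_end - list_start) + removals
print("additions to the list (demand):", demand)    # 1100

# Recurring activity keeps up with demand; removals need no slots
recurring = demand - removals
print("recurring activity needed:", recurring)      # 1080

# Activity beyond the recurring level is non-recurring and shrinks
# the list; here activity fell short, so the list grew instead.
print("shortfall (list growth):", recurring - activity)   # 50
```

Note that the 50-patient shortfall matches the observed growth in the list from 400 to 450, which is exactly the "accumulated mismatch" point above.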
So far so good. But behind all this, we are making a big assumption that won’t spring out of a planning model: that all our “demand” represents real work that we need to do. For instance:
- a patient is seen in outpatients by the wrong consultant and has to be rebooked with the right one; is the first appointment “demand”?
- a patient is referred for unnecessary follow-up by a junior who is not confident enough to discharge; is this follow-up “demand”?
- a patient is seen in outpatients, but the necessary test results aren’t ready so they have to be rebooked; is this “demand”?
- a one-stop clinic replaces an outpatient-diagnostic-outpatient sequence; does demand fall by two-thirds?
And on the inpatient side, are any of the following “demand”?
- a patient remains in an acute bed for a couple of days longer than necessary, waiting for a ward round and then drugs;
- a patient arrives for surgery, but is sent home and rebooked because they had toast for breakfast;
- a patient is admitted to avoid breaching the 4 hour A&E target, even though they don’t meet any AEP criteria.
These examples of “demand” are not caused by unmet healthcare needs in the population. Rather, they are artefacts of the system. How much of our total demand is created like this? 3 per cent? 10 per cent? 30 per cent? Do we have the faintest idea?
If it’s a sizeable proportion, and I suspect it probably is, then reducing it could substantially offset the (apparently) growing genuine demand for healthcare. Which would be handy at a time of near-frozen real-terms funding.
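The arithmetic behind that hunch is straightforward. Assuming, purely for illustration, that 10 per cent of recorded demand is artefactual and genuine demand grows at 3 per cent a year:

```python
# Rough arithmetic: if a slice of recorded demand is system-generated,
# removing it absorbs several years of genuine growth. Both
# percentages are assumptions for illustration, not measured values.

artefact_share = 0.10   # suppose 10% of demand is a system artefact
genuine_growth = 0.03   # assumed annual growth in genuine demand

years_offset = 0
level = 1.0
# Count whole years of growth that fit inside the artefactual slice
while level * (1 + genuine_growth) <= 1 / (1 - artefact_share):
    level *= 1 + genuine_growth
    years_offset += 1

print(f"removing the artefact absorbs about {years_offset} years of growth")
```

On those assumptions, eliminating the artefactual slice buys roughly three years' breathing space.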
Any sensible information professional keeps different kinds of data in different places. So activity is all in the main database, but things like performance assumptions and pathway changes are kept in different tables. Or you get the data on the fly by asking colleagues. You certainly don’t keep it all together in a single table.
But when it comes to planning for next year, you do need to stitch it all together somehow. In a home-grown spreadsheet model (and in most commercial planning software) this is a fiddly and error-prone affair. We wanted Gooroo Planner to change all that, and in this post we’re going to explain how it works.
You start using Gooroo Planner by loading some very basic historical activity data. You can use patient-level data straight from your main database (this is anonymised and therefore not confidential), or load up some simple statistical data if you prefer. The important thing is to import this basic data at the right level of detail, so if your final report needs to be at HRG and specialty level, then you need to load the initial data by HRG and specialty; but if just specialty-level will do, then do it by specialty.
When you’ve created your basic dataset, you can start chucking in everything else you need: demand assumptions, average lengths of stay, waiting list sizes, whatever you like. This is the clever bit, and it uses the Update button on the main Planner workflow. What makes it so powerful is that you can throw in data at any level of detail you like, and Update will automatically adapt it to match your main dataset. So you can use assumptions that are too broad, or too detailed, or even a mixture of the two.
The Update button will do all of this automatically, without you having to think about it. But out of interest, let’s take a look under the bonnet and see what Planner is doing inside.
For example, let’s say you want your plan to be broken down by organisation, specialty and activity type. So you load up some basic activity data at this level of detail. At this stage it is best to use codes instead of names, to ensure accurate data matching (don’t worry: these codes can be turned automatically into plain English when Planner builds your final report). With your basic dataset in place, you now have the framework that all subsequent data will hang from.
In CSV form, a simple example of this basic dataset, with a header and three services, might look like this. The headers in the first line are the ones used internally by Planner (and the system will give you plenty of help with those); the first three headers describe the organisation, specialty, and activity type; and the last two describe waiting list activity and non-waiting list activity.
Now let’s say you want to assume that non-elective demand is going to grow at 3% a year, elective at -1%, and daycase at 2%. In the past you would have had to go through all the services in your model (and there might be hundreds of them) editing in all the right numbers. Not in Gooroo Planner: just use the Update button to load up those assumptions exactly as they stand, and it will automatically detect the format and apply the right numbers to the right services. So for the above example you would Update with the following CSV file:
and that would turn your dataset (which only contains elective data) into this:
Now let’s try something much more complicated. Let’s say you have some care pathway changes to add in, and they are expressed as reductions in demand, broken down by HRG, specialty, and activity type. So now you have some extra detail (HRG) and some missing detail as well (organisation). You can throw it in just as it stands. Planner will automatically detect what’s going on, and adapt the data so that it fits.
For example, let’s say that two of your pathway assumptions are:
- a 10 spell reduction for HRG AA111, specialty 100, activity type Elec; and
- a 20 spell reduction for HRG BB222, specialty 100, activity type Elec.
You would Update using the following CSV file:
First, Update eliminates the HRG data, leaving:
Then it spreads this pathway change across the organisation codes in our main dataset. So now our main dataset looks like this:
As you can see, the demand change has been apportioned pro rata with the activity at each organisation.
Different kinds of data need different rules when it comes to apportioning them to different services in Update. So in the above examples, demand growth was simply copied across each service, but the pathway change had to be distributed between them. All those different rules are coded right into the system so that it all works out correctly.
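For readers who like to see the mechanics, here is a sketch of those two apportionment rules using plain Python dicts. The organisation codes, field layout and figures are hypothetical, invented for this example; they are not Planner's internal format.

```python
# Two apportionment rules, sketched over a service-keyed dict:
#   keys are (organisation, specialty, activity_type),
#   values are waiting-list activity. All codes are hypothetical.

dataset = {
    ("RXX", "100", "Elec"): 300,
    ("RYY", "100", "Elec"): 100,
}

def apply_rate(data, activity_type, rate):
    """Rule 1: a rate (e.g. demand growth) is COPIED to every
    matching service unchanged."""
    return {k: v * (1 + rate) if k[2] == activity_type else v
            for k, v in data.items()}

def apportion(data, specialty, activity_type, change):
    """Rule 2: an absolute change (e.g. a pathway reduction supplied
    without an organisation) is SPREAD pro rata with each matching
    service's activity."""
    matches = {k: v for k, v in data.items()
               if k[1] == specialty and k[2] == activity_type}
    total = sum(matches.values())
    return {k: v + change * v / total if k in matches else v
            for k, v in data.items()}

grown = apply_rate(dataset, "Elec", -0.01)         # elective demand -1%
reduced = apportion(dataset, "100", "Elec", -30)   # 30-spell reduction
print(reduced)   # RXX loses 22.5 spells, RYY loses 7.5
```

The design point is that the caller never says which rule to use: rates are copied, absolute changes are shared out, and in the real system the right rule is chosen per data type.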
It goes without saying that such powerful data handling was a real challenge to implement as a fully-automated Update function. But we went ahead and did it anyway, so that you don’t have to. All you need to do is hit the Update button, and Planner does the rest.
Of course, if you really want to roll your sleeves up and pick through your datasets manually, you can. At any time, you can just download your dataset, edit it using your favourite database or spreadsheet software, and then load it back up again. With Gooroo Planner, you are always in control.
A recent study has just fallen into my lap (under the Chatham House rule). It is the initial findings of a casenotes review of over 100 short-stay (zero- and one-day) emergency admissions at an English acute hospital.
For me the most interesting highlights of these short-stay admissions were:
- Only 33% were appropriate (and only 22% of those from A&E) under the AEP
- For 80%, the grade of doctor making the decision to admit was “unclear” or “not documented”
- 50% of admissions from A&E were in the last half-hour before the 4 hour target was reached
Given that short-stay emergency admissions are common and rising, this presents a huge opportunity to GP Commissioners. In the short term the combination of unclear records, inappropriate admissions, and the absence of previous GP involvement in the patient all point to opportunities to challenge the hospital’s claims for payment. In the longer term, it presents opportunities for GP triaging of A&E attenders, and the establishment of primary and community alternative pathways.
Isn’t that a bit rough on the hospital? Not really. Not just because the inappropriate admissions are, well, inappropriate. But also because there may be darker things going on underneath these headline figures.
The fact that half of admissions from A&E are made in a scramble, just before the 4 hour target is breached, ties in with national figures and offers a clue about why so many admissions are inappropriate. According to the Information Centre:
Of those patients discharged [from A&E] within the final 10 minutes of the 4 hour wait target, the highest proportion (64.7 per cent) were recorded as ‘Admitted / became a lodged patient’.
So late admissions and inappropriate admissions are linked together. Which raises an intriguing question: is the lack of documentation about the admitting doctor also part of the picture? With only minutes to go before the target is breached, perhaps doctors are in such a rush to admit that the notes are left unclear? Or worse, are some patients being admitted “administratively”, by a non-doctor, just to achieve the target (as is sometimes alleged by NHS staff on comment boards, e.g. here, here, and here)?
The problem at the moment is that hospitals are heavily incentivised to behave like this. Such compromise corrodes the soul. If GP Commissioners challenged payments on inappropriate admissions, so that they became a cost to the hospital instead of a benefit, then the world could start to turn the right way up again.
The King’s Fund has just published a new report on referral management, and delivered a cold shower to the referral management centres that some PCTs have created to weed and redirect GP referrals. It concludes:
the greater the degree of intervention, the greater the likelihood that the referral management approach does not present value for money.
Or, as one triaging GP put it:
It gets back to individuals making decisions on other people’s decisions.
Not that everything is rosy in the world of GP referrals. When GPs were allowed to review their colleagues’ referral letters they were not shy about saying what they thought:
When we first started, some of the referrals were absolutely appalling, dreadful. Two lines, referrals of two lines, please see this patient with headaches, and we automatically rejected all of those…
I mean, I just couldn’t believe my eyes initially, the quality of referrals was just dire
Well, criticisms are always fun to read. But what did work? In the words of the King’s Fund:
A referral management strategy built around peer review and audit, supported by consultant feedback, with clear referral criteria and evidence-based guidelines is most likely to be both cost- and clinically-effective. …
Practice-based commissioning clusters and their successors, the GP commissioning consortia, are the obvious conduit and driver for peer review and audit.
In other words, don’t second-guess the referring GPs; but do work at a doctor-to-doctor level on improving their referral skills. This makes perfect sense. At the time of referral, nobody knows the patient’s condition better than the referring GP. If some GPs aren’t very good at referrals, then the problem is unlikely to be solved by inserting a layer of second-guessers (who have only the inadequate referral letters to base their decisions on). As the King’s Fund says:
any intervention to manage referrals cannot look at the referral in isolation but needs to understand the context in which it is being made
So full marks for the King’s Fund report, then? Very nearly. My slight disagreement is when they say:
any referral management strategy needs to include a robust means of managing the inherent risks at the point when clinical responsibility for a patient is handed over from one clinician to another (so-called clinical hand-offs)
I would argue that they accept the concept of the “clinical hand-off” too readily. Referrals should not be fire-and-forget; rather, the GP should remain available as the patient’s advisor after the referral has been made. After all, patients must give their informed consent to every step of their treatment, and both the consultant and the GP have a role to play in informing them.