The whole point of developing a plan for the coming financial year is to resolve, at the outset, the tensions that are pulling your organisation in opposite directions.
So on the one hand you have demand to keep up with, and 18-week waiting times to sustain. On the other hand you have limited money, capacity and staff to do it with. Somehow your plan needs to reconcile those opposing forces.
What if it doesn’t?
Then the problem is effectively handed on for operational managers to resolve. They end up in meetings where the first half is about laying on costly extra capacity to clear the backlog that’s building up, and the second half is about how they need to slow down activity because they’re “over-performing” and the money’s running out.
That agenda, of course, doesn’t solve anything, so the problems fester. In the autumn, following a series of difficult meetings about demand management, some extra money is somehow identified to patch over some of the gaps. But everyone knows that nothing fundamental has really changed.
If life without a proper plan is so unappealing, why do so many NHS organisations begin the financial year without one?
Let’s try this scenario for size: The planning process begins in good time, but it quickly gets complicated. A lot of people need to be involved: general managers, finance, contracting, information, and that’s just from the hospital side. Different people approach the task in different ways, so there is a mix of methods and not all of them are valid. New assumptions are constantly thrown in to try and close the gap, and the model gets ever more complex. A planning analyst gamely tries to hold it all together in a spreadsheet, but it’s massive and people tire of looking at subsequent versions of it. The detail becomes unwieldy and time is running out. Something high-level has to be hammered out at the last minute, just to make the money balance. The detail is then retrofitted pro-rata and the “plan” signed-off.
In short, inclusive bottom-up planning is overwhelmed by complexity, and a top-down settlement has to be imposed instead. If complexity is the enemy, how could the process be simplified and streamlined, so that the bottom-up process can succeed?
Here is how Gooroo Planner solves the problem:
Firstly, we recognise that much of the data going in is a matter of simple historical record (recent activity levels, for instance). These facts can be agreed early on, and there is no need to discuss them further.
Secondly, we’ve taken all those complex calculations and developed them into a single model, based on principles that are widely-accepted across the NHS, fair to all sides, and transparent. So precious negotiating time is not taken up with detailed discussions about method. The calculations cover the whole of the planning period, and also break the plan down week by week so you can meet your objectives continuously through the seasons, and keep your plan up-to-date with events.
Thirdly, all the performance, demand and activity assumptions are laid out clearly and openly for discussion. Ultimately the key to reaching a settlement lies in successfully negotiating these assumptions, so that resources can be released from some areas to relieve pressures in others. So we’ve made it easy to test different scenarios, either item-by-item or by throwing in whole tables of alternatives.
Finally, we provide collaboration tools to get away from those giant emailed spreadsheets. Managed online collaboration means that participants can all see (and where necessary work on) the same plan, in real time, with full audit trails of any changes.
If you’d like to work that way, either to revise your plans for this year or start getting ready for next winter, then get in touch and we will be happy to visit and show you more. Just email firstname.lastname@example.org for a free on-site demo.
Why is forward planning such a slog in the NHS? Fundamentally, all we are doing is this:
- Take what happened last year
- Add a bit
- Adjust for any specific pathway and demand management changes
- Apply some agreed performance assumptions using well-known equations
- Output the results as activity, capacity and money
- Profile it all into a monthly plan
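Those six steps really do fit in a few lines. The sketch below is illustrative only: the field names, the 3% growth figure, the cost per case and the flat monthly profile are all invented assumptions, not any particular planning tool's model.

```python
# Minimal sketch of the six steps above. All numbers and names are
# illustrative assumptions, not a real planning model.

def plan_service(last_year_activity, growth=0.03, pathway_adjustment=0,
                 cost_per_case=1500, cases_per_theatre_session=4):
    """Take last year, add a bit, adjust for pathway changes, and
    express the result as activity, capacity and money."""
    demand = last_year_activity * (1 + growth) + pathway_adjustment
    activity = round(demand)                         # planned cases
    sessions = activity / cases_per_theatre_session  # theatre capacity needed
    money = activity * cost_per_case                 # budget needed
    return {"activity": activity, "sessions": sessions, "money": money}

def monthly_profile(annual_activity, months=12):
    """Flat 1/12th profile for simplicity; a real plan would weight
    months by seasonal demand and the winter non-elective peak."""
    return [annual_activity / months] * months

plan = plan_service(1200)          # e.g. 1,200 cases last year
profile = monthly_profile(plan["activity"])
```

The hard part, as the rest of this post argues, is not these sums but doing them consistently across hundreds of service lines with messy inputs.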
The first thing that makes it difficult is the sheer volume of numbers involved. Your plans need to break everything down at least by specialty (treatment function code), or by HRG chapter, or even by HRG. Then you need to separate out emergency spells, elective spells, A&E, first outpatients, etc. And you need to split it by commissioner or provider, and possibly by provider site as well. All in all, you are looking at dozens of service lines at least, and quite possibly hundreds.
The second problem is that different kinds of data come from different places in different formats (including notes of meetings and scraps of paper). Some of the performance assumptions are broad-brush, some are detailed, and some are exceptions to a general rule. They somehow need knitting together into a single planning model. And they keep changing: time goes by and more recent activity data becomes available; performance assumptions and pathways are negotiated and amended; new guidance comes down from the Department of Health (and in future the Commissioning Board).
The third problem is that some of the historical data is prone to errors: activity is not completely or correctly coded, there are delays in recording events on the system, there are duplicates and omissions, and changing customs and practices cause coding drift and other systematic error. To some extent, these errors can be detected and corrected automatically; in many cases they can’t.
The fourth problem is that well-known equations do not exist for some of the workings. Waiting time standards of the form “90% of patients must be treated within 8 weeks” have historically been a high-profile example; the standard is easy to state, but to model it properly you need to take in the effect of clinical urgency, cancellations, whether you are running a fully-booked or partially-booked system, and other factors. If you try to simplify the problem by assuming that current practice reflects how things ought to be, then you are ignoring (often substantial) opportunities to improve.
There are similar problems with monthly profiling: you can profile non-elective work based on historical patterns; but what agreed methods are there for profiling inpatient elective work around peaks in non-elective demand, when the 18-week waiting time limit means that you can’t slow down surgery very much over the winter?
The fifth problem is that you probably have the wrong tools for the job. The suggested tool for presenting your plans is usually a spreadsheet, and (despite the well-known problems with spreadsheet errors, and their limitations when it comes to iterative calculations) they are the cultural default.
How much does this matter? Aren’t these plans just shelfware? Feeding the beast, and all that?
Actually, no. Although your painstakingly-crafted plans may end up on the shelf afterwards, there are two good reasons why the effort is important:
- The planning process causes lots of conversations to happen that do change the way healthcare is delivered, and the numbers make sure those conversations are tough enough.
- The financial squeeze is now adding urgency: PCTs will not be allowed to create a legacy of debt for future GP consortia, continually-rising demand is no longer affordable, and hospitals have a capacity overhang from the boom years… and so back to point 1 above.
It is natural when planning to focus on the correctness of the calculations. The complexity of the process can make this all-consuming.
But it is equally important to make sure that everyone else involved can keep track of the performance and pathway assumptions being used. Why? Because when clinicians and managers make changes to healthcare in real life, they are implementing changes to these assumptions.
Of course the calculations must be right, and the “bottom line” results are crucial in showing how much further negotiation will be needed. But it is also worth paying attention to the presentation of those key assumptions. If other people can easily see what they are, what they mean, and how they change during negotiations, then better decisions will be made about them, and the planning process will be a more powerful force for improvement in the real world.
Garbage in, garbage out.
When you’re planning for next financial year, you don’t want your modelling to be shot through with data errors. But neither do you want to have to pick through your data, line by line, looking for errors and fixing them manually (and perhaps inconsistently).
So how can you detect and fix common data errors automatically? Naturally, it depends on what’s wrong, and the difficulties come in several flavours.
The data you get directly from your activity database is likely to be pretty complete (accuracy is a different matter), so that takes care of completeness for activity counts, actual lengths of stay, urgency rates (which are essential for calculating waiting times), and so on. You are more likely to find gaps in data that comes from elsewhere, such as:
- demand growth assumptions
- waiting time targets
- waiting list sizes
- removal rates
- bed occupancy and bed utilisation assumptions
- bed, theatre and clinic performance assumptions
The important thing is to be explicit about the assumptions you want to make when any of these data items are missing. In many cases zero will be an acceptable default when data is missing, but sometimes it won’t and it’s dangerous to assume.
For instance if you know the waiting list size at the start, but not the end, of your historical data period, then zero would not be a safe default because your model would then be based on a rapidly shrinking list size (and therefore a high level of historical non-recurring activity). So it would be more sensible to assume the waiting list had remained a constant size, and populate the missing end list by copying the start list. Or vice versa, if it’s the start list size that’s missing.
For demand growth (or waiting time targets) you may have a standard set of assumptions, such as 3% growth for non-electives (or 90% admitted within 8 weeks), in which case you just need to ensure that your standard assumption is used wherever it is needed (but without over-writing any exceptions).
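A sketch of that filling logic, with invented field names and default values, might look like this: defaults are applied only where a value is missing, so exceptions that were negotiated individually survive, and missing list sizes are filled by copying rather than zeroing.

```python
# Illustrative sketch: fill missing planning inputs explicitly.
# Field names and default values are invented for the example.

def fill_defaults(row, defaults):
    """Apply a standard assumption only where a value is missing (None),
    so existing exceptions are never overwritten."""
    keys = set(row) | set(defaults)
    return {k: row.get(k) if row.get(k) is not None else defaults.get(k)
            for k in keys}

def fill_list_sizes(row):
    """If one of the start/end waiting list sizes is missing, assume a
    constant list by copying the known figure (safer than zero)."""
    row = dict(row)
    if row.get("end_list") is None:
        row["end_list"] = row.get("start_list")
    elif row.get("start_list") is None:
        row["start_list"] = row.get("end_list")
    return row

defaults = {"growth": 0.03, "removal_rate": 0.0}
service = {"start_list": 250, "end_list": None,
           "growth": None, "removal_rate": 0.01}
service = fill_list_sizes(fill_defaults(service, defaults))
```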
For removals, zero is often a good-enough default because any systematic errors in the removal rate should be second-order in a well-constructed demand calculation.
Capacity performance figures are proportional to the capacity being calculated, so it is important to get these numbers right. However, it is common for very broad assumptions to be made without proper consideration. For instance, 85% bed occupancy is often assumed to be a suitable buffer against fluctuations, only for this figure to be arbitrarily raised to 95% when the calculations show that a bigger hospital would be needed! This is a big subject in its own right, but the broad message is that Trusts would benefit from paying closer attention to their capacity assumptions.
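The arithmetic behind that sleight of hand is worth seeing. In this illustrative sketch (the numbers are made up), raising the occupancy target from 85% to 95% "finds" a dozen beds without the workload changing at all:

```python
# Illustrative only: the bed base implied by an average occupancy target.

def beds_needed(avg_beds_occupied, occupancy_target):
    """Beds required so that average occupancy meets the target."""
    return avg_beds_occupied / occupancy_target

at_85 = beds_needed(102, 0.85)   # about 120 beds
at_95 = beds_needed(102, 0.95)   # about 107 beds, on paper
```

The workload has not changed; only the assumed buffer against day-to-day fluctuations has been quietly shrunk.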
Even if the raw data in a low-volume service is accurate, it can still be misleading for modelling purposes because the data is “noisy”. For instance, in a service that is provided only occasionally by one consultant, demand might fluctuate wildly:
- 10 in 2007-08
- 5 in 2008-09
- 18 in 2009-10
You can see the danger. If you used only the years from 2008-09 to 2009-10 when calculating the trend, you might conclude that demand was growing at 260% per year and so future demand would be:
- 65 in 2010-11
- 233 in 2011-12
- 840 in 2012-13
Which would be ridiculous, but not necessarily easy to spot if you’re crunching dozens (or even hundreds) of services automatically in a giant spreadsheet.
Instead, you need to cap demand growth within sensible limits. It is also sensible to avoid conducting detailed waiting time modelling on very small services (again because of noisy data leading you astray), and instead assume simply that demand must be met.
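A minimal sketch of that capping, using the figures above (the ±10% limits are an invented assumption, chosen for illustration):

```python
# Sketch: clamp a noisy year-on-year growth estimate within sensible
# limits before projecting forward. The +/-10% limits are illustrative.

def capped_growth(prev, latest, floor=-0.10, cap=0.10):
    """Raw growth rate between two years, clamped to [floor, cap]."""
    raw = (latest - prev) / prev
    return max(floor, min(cap, raw))

def project(latest, growth, years):
    """Project demand forward at a constant growth rate."""
    out, x = [], latest
    for _ in range(years):
        x *= 1 + growth
        out.append(round(x))
    return out

g = capped_growth(5, 18)      # raw growth is +260%, capped to +10%
forecast = project(18, g, 3)  # 20, 22, 24 cases rather than 65, 233, 840
```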
Additions data is used in waiting time calculations, but this data source is notoriously unreliable. The standard check for data quality (in the absence of suspensions and deferrals) is the reconciliation formula:
start list + additions = end list + activity + removals
So in most cases you can cross-check additions data against the other data, and if it lies outside a defined tolerance then you can cap it. So far, so simple.
However there is a complication when it comes to admitted patients. Daycases who stay overnight automatically become inpatients. So when you use the formula, you might find that it is out of balance because some of the daycase additions ended up as inpatient activity.
The solution is to consider daycases and inpatients together when detecting errors in the additions figures for a given specialty. If the reconciliation works out well in total, then the separate additions figures do not need adjusting.
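A sketch of that pooled check (the field names and the 5% tolerance are invented for illustration):

```python
# Sketch: reconcile additions across daycases and inpatients together,
# so daycases converted to inpatient activity don't raise false alarms.

def reconciliation_gap(start_list, additions, end_list, activity, removals):
    """start list + additions - (end list + activity + removals)."""
    return (start_list + additions) - (end_list + activity + removals)

def additions_ok(daycase, inpatient, tolerance=0.05):
    """Accept the additions figures if the combined gap is within a
    tolerance expressed as a fraction of total additions."""
    gap = reconciliation_gap(**daycase) + reconciliation_gap(**inpatient)
    total_additions = daycase["additions"] + inpatient["additions"]
    return abs(gap) <= tolerance * total_additions

# Ten daycase additions ended up as inpatient activity: each group's
# formula is out of balance on its own, but the combined check reconciles.
daycase = dict(start_list=200, additions=400,
               end_list=190, activity=380, removals=20)
inpatient = dict(start_list=100, additions=150,
                 end_list=95, activity=160, removals=5)
```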
Waiting list data
Sometimes there is a delay between receiving an elective referral (or making a decision to admit) and logging that patient onto the IT system as an addition to the outpatient (or inpatient/daycase) waiting list. So if you were to extract yesterday’s waiting list you would miss any patients that haven’t been keyed-in yet.
For planning purposes, this problem can usually be avoided just by using older data. When planning for next financial year, some months before it even starts, it makes little difference whether you rely on data up to the end of last month or the month before.
Gooroo Planner has comprehensive error detection and correction built-in, under the control of settings tables that are pre-set with defaults and editable by the user. This ensures that automated error handling is performed consistently and under user control.
Type “spreadsheet error” into Google and a litany of woes will unfurl. Spreadsheet error is serious stuff: so serious that it’s a research field in its own right, and even has an international organisation devoted to it. The leading researcher in the field, a fellow called Raymond Panko, concludes:
All in all, the research done to date in spreadsheet development presents a very disturbing picture. Every study that has attempted to measure errors, without exception, has found them at rates that would be unacceptable in any organization. … With such high cell error rates, most large spreadsheets will have multiple errors, and even relatively small “scratch pad” spreadsheets will have a significant probability of error.
Surely any serious errors would be picked up before they did lasting damage? That’s what everybody hopes, but unfortunately the answer is: sometimes. And sometimes not. At the height of the banking crisis, for instance, Barclays Capital accidentally bought 179 trading contracts they didn’t want from the collapsed Lehman Brothers, all because of a spreadsheet formatting error. As one study of spreadsheet errors put it:
We draw two fundamental conclusions… First, it is clear that spreadsheet errors sometimes lead to major losses and/or bad decisions in practice. Indeed, we heard about managers losing their jobs because of inadequate spreadsheet quality control. Second, many senior decision makers whose organizations produce erroneous spreadsheets do not report serious losses or bad decisions stemming from those flawed spreadsheets. Hence, it seems in no way inevitable that errors in spreadsheets that inform decisions automatically lead to bad decisions.
But then again, it is in no way inevitable that they won’t. (Unless spreadsheets totally lack influence in your organisation, in which case why create them?)
Of course spreadsheets have their place. You use them all the time, and so do I, because they’re easy, flexible, and great for doing things “on the fly”. But a useful spreadsheet also has a tendency to grow like Topsy, until you start to worry that you’re not entirely sure exactly how it works, keep finding mistakes in it, notice “odd” results that don’t reconcile, and find that you’re spending inordinate amounts of time on it with relatively little to show for your efforts.
Or, to get down to the nitty gritty, you might remember times when:
- a formula was “temporarily” changed to a number, but not changed back again;
- a formula was changed, but the change was not copied across to other cells;
- an error was made when typing in data from a different source;
- cells were linked to an external table, but some of the links were misaligned.
We’ve all been there.
Once a spreadsheet becomes complex, or needs to be flooded with data from a database, or needs to run very similar calculations over and over again with different numbers, then it probably shouldn’t be a spreadsheet any more. NHS planning spreadsheets will typically tick all those boxes.