Measuring delivery reliability: be aware of the pitfalls!
Delivery reliability: there is hardly a company that does not measure it in one way or another. After all, we all want and need to be customer-oriented, right? Measuring our delivery reliability should enable us to check whether we achieve our objectives and to take the proper improvement actions if required.
But this does mean that our measurement should be done thoughtfully. It should represent what we want to achieve and lead to the right, relevant actions. Unfortunately, we often don’t even know exactly what it is we measure, let alone whether it properly reflects what our customer expects from us. Our measurement may even lead to totally undesirable behavior if we don’t watch out.
Measuring “on time”
In most cases, delivery reliability is expressed as a percentage: “Our delivery reliability this month was 93%”. Interestingly, most people in the organization, including top managers, do not even know how this percentage is determined.
What is the unit of measurement, for instance? Are we talking orders, order lines, production order lines, tons or pieces? And how do you deal with partial deliveries? Do you apply a tolerance?
And what, in fact, is “reliable”? Is too early also OK? And do you measure on a weekly basis or a daily basis? Or even against a specific time window? Do you even know how your organization actually records the reference value for “on time”? Do you take the original customer requested date, a negotiated date, or your own promise date? Or even the date you negotiated and agreed to at the end, when it turned out you couldn’t keep your first promise? And how do you even know your product actually arrived at the customer? And when you are not on time, do you record the miss in the week you should have delivered or in the week that you finally did deliver?
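To see how much these definition choices matter, here is a minimal sketch in Python, using made-up order lines and dates, that shows how the very same deliveries produce different “on time” percentages depending on which reference date, tolerance and treatment of earlies you pick:

```python
from datetime import date

# Hypothetical order lines: (requested date, promised date, actual delivery date)
lines = [
    (date(2024, 5, 6),  date(2024, 5, 6),  date(2024, 5, 6)),   # on time either way
    (date(2024, 5, 6),  date(2024, 5, 8),  date(2024, 5, 8)),   # on promise, late vs request
    (date(2024, 5, 10), date(2024, 5, 10), date(2024, 5, 11)),  # one day late
    (date(2024, 5, 10), date(2024, 5, 13), date(2024, 5, 12)),  # early vs promise
]

def on_time_pct(lines, reference, tolerance_days=0, earlies_ok=True):
    """Share of lines delivered within tolerance of the chosen reference date."""
    hits = 0
    for requested, promised, actual in lines:
        ref = requested if reference == "requested" else promised
        delta = (actual - ref).days
        if earlies_ok:
            hits += delta <= tolerance_days
        else:
            hits += abs(delta) <= tolerance_days
    return 100 * hits / len(lines)

print(on_time_pct(lines, "promised"))   # 75.0 — measured against our own promise
print(on_time_pct(lines, "requested"))  # 25.0 — measured against what the customer asked
```

Same four deliveries, wildly different scores: one definition makes us look reasonably reliable, the other makes it clear the customer rarely got what he asked for.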
You don’t know, you say? Try asking these questions in any organization and chances are you are not the only one who doesn’t know exactly what is actually being measured…
Many don’t even know what it is they are actually measuring…
And when we look at how companies try to improve their delivery reliability, the way we have set up our measurement itself can hamper our improvement ambitions.
Just think about what happens when we measure delivery reliability as a percentage. Imagine a product is already late. What incentive could possibly come from a measurement based upon the “on time” percentage? We’re already late anyway, right? Better to prioritize another product on the shop floor and let the late one wait, so that we are sure the other one still arrives in time. Even if that makes the product that is already late even later. Right? And definitely when there’s a bonus tied to the “on time” delivery percentage! An unrealistic scenario, I hear you say? If only it were…
The above leads to “gaming of the system”, whereby we’re more focused on achieving a certain number by playing tricks and creatively interpreting our internal rules than on what we are actually trying to achieve.
I hope it is clear we need to be thoughtful about our measurements. Do we correctly represent what our customer expects from us? In every situation that may exist? For instance, does our customer really accept early deliveries? I wouldn’t, as it transfers the responsibility for parts (and cash) far too early; it leads to inventory and wasted space (or even direct cost when an external warehouse is used). And does our measurement system help create the right behavior and attitude? In short, be precise and be strict. The goal is not “to be satisfied”, but “to be able to get better”.
The goal is not “to be satisfied”, but “to be able to get better”.
And why measure against our own promise? That point of view only measures whether we do what we ourselves promised, which doesn’t necessarily correlate with what the customer actually wanted. Is that what we want? Sure, measuring against the customer request can hurt. But again, what was it you wanted to achieve?
And when we want to improve ourselves, why do we in fact measure what went well? Why not measure the things we can still learn from? So why do we speak of delivery “reliability” and not “unreliability”? I hardly come across this viewpoint in industry, other than at Toyota, its suppliers and some other Lean-oriented companies.
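As one possible illustration of that “unreliability” viewpoint (hypothetical orders and causes, not any particular company’s data): instead of reporting a satisfying percentage, report the misses themselves, each with its cause, so every one becomes a learning opportunity:

```python
# Hypothetical deliveries: (order, days late, cause of the miss)
deliveries = [
    ("A-101", 0, None),
    ("A-102", 3, "material shortage"),
    ("A-103", 0, None),
    ("A-104", 1, "late order entry"),
]

# The "unreliability" view: list what went wrong, not what went right
misses = [(order, late, cause) for order, late, cause in deliveries if late > 0]
for order, late, cause in misses:
    print(f"{order}: {late} day(s) late, cause: {cause}")
```

A 50% “reliability” number invites a shrug; a list of two concrete misses with causes invites a problem-solving discussion.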
And is the customer really interested in an order, order line or delivery? Or is he more interested in getting your parts “just in time” to his production process? From that point of view, does it make a difference to your customer whether you deliver a line with 2 parts late, or a line with 10 parts? Think about what the order line will look like in one-piece flow… Still, here too, most companies measure based upon order lines. Except Toyota and some others that measure based upon parts…
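The difference between a line-based and a part-based view can be sketched in a few lines (again with made-up numbers):

```python
# Hypothetical order lines: (parts on the line, delivered on time?)
lines = [(2, False), (10, True), (10, True), (3, True)]

# Line-based: every line counts the same, whether it carries 2 parts or 10
line_based = 100 * sum(ok for _, ok in lines) / len(lines)

# Part-based: weight each line by the number of parts the customer was waiting for
part_based = 100 * sum(qty for qty, ok in lines if ok) / sum(qty for qty, _ in lines)

print(line_based)  # 75.0 — one late line out of four
print(part_based)  # 92.0 — only 2 of 25 parts were late
```

Which of the two numbers matters depends on what actually hurts your customer’s production process; the point is that they can differ substantially for the same deliveries.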
And then there is still the question of whether one day late is the same as a week late. We typically measure a percentage, but shouldn’t we measure the difference between the requested date and the actual date instead? GE measures the variation in this gap between request and actual and considers this variation far more important than the percentage. How many other companies apply this in their “delivery reliability” measurement system?
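A short sketch with made-up lateness data shows what the single percentage hides: both how late the misses were and how much the gap varies from order to order:

```python
from statistics import mean, pstdev

# Hypothetical gaps in days between requested and actual delivery (negative = early)
gaps = [0, 1, 0, 7, -2, 0, 1, 14]

# The usual single number: share of deliveries at or before the requested date
pct_on_time = 100 * sum(g <= 0 for g in gaps) / len(gaps)

print(f"on time: {pct_on_time:.0f}%")
print(f"mean gap: {mean(gaps):.1f} days")              # how late on average
print(f"variation (std dev): {pstdev(gaps):.1f} days")  # how predictable are we?
```

Two suppliers can both report “50% on time” while one is consistently a day late and the other swings between two days early and two weeks late; the standard deviation of the gap exposes that difference, the percentage does not.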
If you’d like to evaluate your demand patterns and reliability in this way, I gladly refer you to THE JIT COMPANY’s Demand Analysis course and tools.
Many measuring pitfalls
As we have seen, there are quite a few pitfalls in measuring our on-time delivery performance. Improving our service in this respect starts with the proper detection of problems. And detecting problems starts with the right measurement. Otherwise we could be surprised by dissatisfied customers and undesirable behavior… just when, going by our current measurement, we thought we were doing quite well…