Measuring delivery reliability: be aware of the pitfalls!

Sep 27, 2016 | Just-in-Time in Practice, Just-in-Time Manufacturing

Delivery reliability: there is hardly a company that is not measuring it in one way or another. After all, we all want and need to be customer-oriented, right? Measuring our delivery reliability should enable us to check the achievement of our objectives and to take the proper improvement actions if required.

But this does mean that our measurement must be done thoughtfully. It should represent what we want to achieve and lead to the right, relevant actions. Unfortunately, we often don't even know exactly what it is we measure, let alone whether it reflects what our customer expects from us. Our measurement may even lead to totally undesirable behavior if we don't watch out.

Measuring “on time”

In most cases, delivery reliability is expressed as a percentage: “Our delivery reliability this month was 93%”. Interestingly, most people in the organization, including top managers, do not even know how this percentage is determined.

What is the unit based upon which the measurement takes place, for instance? Are we talking orders, order lines, production order lines, tons or pieces? And how do you deal with partial deliveries? Do you apply a tolerance?

And what, in fact, is "reliable"? Is too early also OK? And do you measure on a weekly or a daily basis? Or even against a specific time window? Do you know how your organization actually records the reference value for "on time"? Do you take the original customer-requested date, a negotiated date, or your own promise date? Or even the date you agreed to at the end, when it turned out you couldn't keep your first promise? And how do you even know your product actually arrived at the customer? And when you are not on time, do you record the miss in the week you should have delivered, or in the week that you finally did deliver?
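These questions are not rhetorical: the very same deliveries can produce very different "reliability" percentages depending on the answers. A minimal sketch, assuming invented order lines and a hypothetical helper `on_time_pct` (the dates, data layout, and parameter names are all illustrative, not any standard definition):

```python
from datetime import date

# Hypothetical order lines: (requested date, promised date, actual delivery date)
lines = [
    (date(2016, 9, 5),  date(2016, 9, 7),  date(2016, 9, 7)),   # late vs. request, on time vs. promise
    (date(2016, 9, 5),  date(2016, 9, 5),  date(2016, 9, 2)),   # 3 days early
    (date(2016, 9, 12), date(2016, 9, 12), date(2016, 9, 13)),  # 1 day late
    (date(2016, 9, 19), date(2016, 9, 21), date(2016, 9, 20)),  # late vs. request, early vs. promise
]

def on_time_pct(lines, reference, tolerance_days=0, early_ok=True):
    """Share of lines delivered 'on time' for a given choice of reference.

    reference:      'requested' or 'promised' date as the benchmark.
    tolerance_days: days late still counted as on time.
    early_ok:       whether an early delivery counts as on time.
    """
    idx = 0 if reference == "requested" else 1
    hits = 0
    for line in lines:
        delta = (line[2] - line[idx]).days  # positive = late
        if delta <= tolerance_days and (early_ok or delta >= 0):
            hits += 1
    return 100 * hits / len(lines)

# Four defensible definitions, four different 'delivery reliability' numbers:
print(on_time_pct(lines, "requested"))                    # strict, vs. customer request
print(on_time_pct(lines, "promised"))                     # vs. our own promise
print(on_time_pct(lines, "requested", tolerance_days=1))  # with a 1-day tolerance
print(on_time_pct(lines, "requested", early_ok=False))    # earlies count as misses
```

Until an organization can state which of these definitions its monthly percentage actually uses, the number itself says very little.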

You don’t know, you say? Try asking these questions in any organization, and chances are you are not the only one who doesn’t know exactly what it is they are actually measuring…

Many don’t even know what it is they are actually measuring…

Improving reliability

And when we look at how companies try to improve their delivery reliability, the way we have set up our measurement can hamper our improvement ambitions.

Just think about what happens when we measure delivery reliability as a percentage. Imagine a product is already late. What incentive could possibly come from a measurement based upon the “on time” percentage? We’re already late anyway, right? Better to prioritize another product on the shop floor and let the late one wait, so that we are sure the other one still arrives in time. Even if that means delivering the product that is already late even later. Right? And definitely when there’s a bonus tied to the “on time” delivery percentage! An unrealistic scenario, I hear you say? If only it were…

The above leads to “gaming the system”, whereby we’re more focused on hitting a certain number through magic tricks and creative interpretation of our internal rules than on what we are actually trying to achieve.

I hope it is clear that we need to be thoughtful about our measurements. Do we correctly represent what our customer expects from us, in every situation that may exist? For instance, does our customer really accept early deliveries? I wouldn’t, as it transfers the responsibility for the parts (and the cash) far too early; it leads to inventory and waste of space (or even direct cost when an external warehouse is used). And does our measurement system help in creating the right behavior and attitude? In short, be precise and be strict. The goal is not “to be satisfied”, but “to be able to get better”.

The goal is not “to be satisfied”, but “to be able to get better”.

And why measure against our own promise? Taking that point of view only measures whether we do what we ourselves promised, but that doesn’t necessarily correlate with what the customer actually wanted. Is that what we want? Sure, measuring against request can hurt. But again, what is it you wanted to achieve?

And when we want to improve ourselves, why do we in fact measure what went well? Why not measure the things from which we can still learn? So why do we speak of delivery “reliability” and not “unreliability”? I hardly come across this viewpoint in industry, other than at Toyota, its suppliers, and some other Lean-oriented companies.

And is the customer really interested in an order, an order line, or a delivery? Or is he more interested in getting your parts “just-in-time” for his production process? From that point of view, does it make a difference to your customer whether you deliver a line with 2 parts late, or a line with 10 parts? Think about what the order line will look like in one-piece flow… Still, here too, most companies measure based upon order lines. Except Toyota and some others, which measure based upon parts…
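The difference between the two bases is easy to see on invented numbers. In this sketch (the order lines and quantities are hypothetical), a 2-part miss and a 10-part miss weigh the same in a line-based measure, while a part-based measure weighs each part the customer’s process was waiting for:

```python
# Hypothetical order lines for one period: (parts on the line, delivered on time?)
lines = [(2, False), (10, False), (5, True), (8, True)]

# Line-based: every line counts equally, whether it holds 2 parts or 10.
line_based = 100 * sum(ok for _, ok in lines) / len(lines)

# Part-based: every part counts, closer to what the customer's process experiences.
part_based = 100 * sum(qty for qty, ok in lines if ok) / sum(qty for qty, _ in lines)

print(f"line-based: {line_based:.0f}%, part-based: {part_based:.0f}%")
```

With a different mix of quantities on the late lines, the two percentages can diverge much more sharply; the point is that they answer different questions.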

And then there is still the question whether one day late is the same as a week late. We typically measure a percentage, but shouldn’t we measure the difference between the requested date and the actual date instead? GE measures the variation in this gap between request and actual, and considers this variation far more important than the percentage. How many other companies apply this in their “delivery reliability” measurement system?
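A variation-oriented view can be sketched in a few lines. In this illustration (the supplier names and gap values are invented, and I treat early-or-on-time as "on time"), two suppliers score identically on the usual percentage, yet one is far harder to plan around:

```python
import statistics

# Hypothetical gaps in days between requested and actual delivery (positive = late)
gaps = {
    "Supplier A": [0, 1, -1, 1, 0, 1, -1, 1],   # small, steady deviations
    "Supplier B": [0, 7, -6, 8, 0, 9, -8, 2],   # same hit rate, wild swings
}

for name, g in gaps.items():
    on_time = 100 * sum(d <= 0 for d in g) / len(g)  # early or on time counts as a hit
    spread = statistics.pstdev(g)                    # variation in the request-vs-actual gap
    print(f"{name}: {on_time:.0f}% on time, spread {spread:.1f} days")
```

Both suppliers come out at the same on-time percentage, but the spread of Supplier B’s gaps is several times larger, which is exactly the signal the percentage hides.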

If you’d like to evaluate your demand patterns and reliability in this way, I gladly refer you to THE JIT COMPANY’s Demand Analysis course and tools.

Many measuring pitfalls

As we have seen, there are quite a few pitfalls in measuring how well we deliver to our customers on time. Improving our service in this respect starts with the proper detection of problems, and detecting problems starts with the right measurement. Otherwise we could be surprised by dissatisfied customers and undesirable behavior… just when, going by our current measurement, we thought we were doing quite well…


© 2020 THE JIT COMPANY, a label of Dumontis B.V.
