
Supply Chain Planning: Beware The 80/20 Rule

When time to results is critical and many inputs are involved, leveraging the 80/20 rule (also known as the Pareto principle) is a tried-and-true way to focus attention on a subset of the inputs and get a quicker result, but it has its drawbacks.  Most of us have heard the 80/20 rule expressed in various ways: “80% of problems originate from 20% of sources,” “80% of revenue is driven by 20% of customers,” and so on.  Even in cases where the rule holds true on its own, over-using it becomes problematic and impedes business outcomes.  This is particularly true in supply chain planning and operations, which commonly involve many inputs, multiple stakeholders, volumes of data, and multitudes of variables – from raw materials to production to distribution to the various “what if” scenarios that the planning team or executive leadership would like to test.


The Problem


Many existing supply chain planning processes and technologies attempt to factor in the relevant inputs the business needs to operate or plan, but the sheer number of inputs, the volumes of data, and the varied sources of each can still stymie systems and frustrate supply chain professionals.  This leads to constrained benefits, potentially missed targets, and more grueling work and longer hours than ideal; in turn, it produces disheartened and fatigued supply chain professionals, doubtful stakeholders, and ultimately suboptimal business results.


Many of these problems are rooted in excessive summarization or insufficient detail, whether by individual team members, the process, a system, or some combination of the three.  As an example, consider a team member who maintains over-generalized sets of changeover data or incomplete item master data.  The missing data components are typically the lower-volume items, the team member has any number of other urgent priorities consuming their time, and the sources of the missing information rest with other stakeholders or other functions altogether.  Thus, the 80/20 rule is used as justification to ignore the data gaps, because most of what matters will be addressed with the data that is available.


In isolation that logic may be valid, but supply chain functions rarely occur in isolation, even if a given team member is not involved in the other stages.  Consider that the same item master must be used to convert quantities to weights.  Meanwhile, a logistics and transportation team member may have a similarly incomplete or generalized set of transport options, costs, capacity limits, availability, and so on.  In isolation, the item master gaps may only marginally impact certain options, and the transportation gaps may only impact a smaller, rarely used subset of transportation options.  In reality, the item master data is often combined with transportation data to calculate weights and volumes, plan accurate pickups and deliveries, or optimally arrange available transportation capacity and costs.  At that point the gaps in each data set begin to compound, resulting in increased suboptimization and errors.  These errors frequently present themselves as ever-present troubleshooting or “fire drills” depending on urgency, but the key is that they are often seen as a natural by-product of the job even though they are potentially avoidable.
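To make that compounding concrete, here is a minimal sketch in Python with pandas, using entirely hypothetical item, lane, and order data (the SKUs, lanes, and column names are illustrative, not from any specific system).  It joins a partially complete item master with a partially complete transportation table, and the gaps surface precisely at the join.

```python
import pandas as pd

# Hypothetical item master: one lower-volume SKU is missing its case weight.
item_master = pd.DataFrame({
    "sku": ["A100", "A200", "B300"],
    "case_weight_lb": [42.0, None, 18.5],
})

# Hypothetical transportation table: one lane is missing its weight capacity.
lanes = pd.DataFrame({
    "lane": ["DC1->EAST", "DC1->WEST"],
    "max_weight_lb": [44000.0, None],
})

# Orders to be planned onto lanes.
orders = pd.DataFrame({
    "sku": ["A100", "A200", "B300"],
    "cases": [500, 200, 800],
    "lane": ["DC1->EAST", "DC1->WEST", "DC1->WEST"],
})

# Join the two imperfect data sets, exactly as a load-planning step would.
plan = (orders
        .merge(item_master, on="sku", how="left")
        .merge(lanes, on="lane", how="left"))
plan["order_weight_lb"] = plan["cases"] * plan["case_weight_lb"]
plan["fits_on_lane"] = plan["order_weight_lb"] <= plan["max_weight_lb"]

# Every row touched by either gap now carries a NaN, and the capacity check
# silently returns False for it: the gaps compound exactly where the data sets meet.
print(plan[["sku", "lane", "order_weight_lb", "fits_on_lane"]])
```

Each data set looks “mostly complete” on its own; it is the combination that turns two small gaps into unusable load plans.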


In another example, that same item master has to be combined with production and MRP data to ensure that the right demand volume and timing gets to raw material providers or co-manufacturers.  However, some of those data sets have also been generalized, improperly structured, or are missing what seem to be smaller, less important components.  Again, the 80/20 rule, while helpful in isolation, begins to compound, creating growing challenges and hurdles.


The 80/20 rule (i.e. 80% accuracy or completeness) compounded just four times leaves you with 41% accuracy or completeness.  Consider just a few different sources of inputs to supply chain planning:

  • Raw Material & Ingredients

  • Item Master Data

  • Historical Order Data

  • Forecast Order Data

  • Plants x Production Lines x Options

  • BOM Data

  • Warehouse/Storage Capacity & Costs

  • Distribution Modes & Costs

Each of these general sources has many subcomponents of data and inputs.  For instance, item master data may include weight, dimensions, and unit count per case, among many other likely variables.  Each production step typically involves multiple configuration and changeover options that are further multiplied by the number of lines or plants.  Distribution options involve point of origin, mode of travel, zones/distance to destination, time to destination, capacity limitations, and cost differences.


An item master that is only 80% complete then has to be factored against a BOM that is also only 80% accurate, run through production options that only account for the usual permutations seen in the past (versus many more potential options), and combined with a set of distribution options that are simplified to generalities.  That’s already four high-level inputs that, if each is only 80% accurate, leave you with a combined accuracy of 41%.

In reality, the numbers and weights often vary substantially.  One set of inputs might have 80% accuracy/completeness, while a second set is 100% complete, a third is at 95% accuracy, and a fourth remains at 70% accuracy/completeness.  The compounded accuracy might not be as low as 41% (it works out to roughly 53% in this example), but even at these levels the compounded value drops into a range that would challenge any supply chain organization to operate efficiently and accurately, undermining resilience and flexibility.  And remember that every hour or day spent reacting (i.e. troubleshooting errors, misalignments, running “fire drills”) is time not spent on proactive supply chain planning, management, and execution; a largely proactive mode of operation is essential to establishing resilience.
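A quick back-of-the-envelope calculation shows the compounding.  The sketch below (Python, using the illustrative accuracy figures from the paragraphs above) simply multiplies the per-input accuracy levels together.

```python
from math import prod

# Four inputs each at 80% accuracy/completeness (the straight 80/20 case).
uniform = [0.80, 0.80, 0.80, 0.80]
print(f"Uniform 80% across four inputs: {prod(uniform):.0%}")  # ~41%

# A more realistic mix: one input at 80%, one complete, one at 95%, one at 70%.
mixed = [0.80, 1.00, 0.95, 0.70]
print(f"Mixed accuracy levels: {prod(mixed):.0%}")             # ~53%
```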


How It Manifests


The example above uses a high-level view, which can make the existence of such problems seem obvious, but there are several subtle ways they tend to manifest:


Distributed Sources & Responsibility.  The responsibility for each of these pieces of input or data rests with multiple individuals across an organization.  In practice, a given team member may have the expertise to know that the data set they’re relying on from another person is over-generalized or incomplete, but they may not have the time to address it, or they may simply feel it’s outside of their scope and, right or wrong, would rather not question or interfere in a function or area of responsibility outside their comfort zone.


Limited Data Management Capabilities.  In other cases, the data management and analytics required to achieve the level of detail that would deliver better or optimal outcomes can exceed the expertise of the average supply chain professional, which is sometimes why the data is generalized from the start.  In one real-world example, a mid-market food manufacturer sought to improve their plant operations.  To do so, demand patterns and changeovers first needed to be assessed and aggregated from hundreds of millions of rows of data.  However, the company’s planning and plant professionals did not have the expertise required to parse and analyze that volume of data, so they did the best they could with simpler tools like Excel, unwittingly leaving valuable distinctions and patterns undiscovered.
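To give a sense of what working beyond spreadsheet limits can look like, here is a minimal sketch in Python with pandas; the file name and column names are hypothetical stand-ins, not the manufacturer’s actual data.  It streams a very large production history in chunks and aggregates it into weekly demand by plant, line, and SKU, a summary that a spreadsheet’s row limit would never allow.

```python
import pandas as pd

CHUNK_ROWS = 5_000_000  # process a few million rows at a time rather than loading everything

partials = []
# "production_history.csv" and its columns are hypothetical placeholders.
for chunk in pd.read_csv("production_history.csv",
                         usecols=["plant", "line", "sku", "run_date", "cases"],
                         parse_dates=["run_date"],
                         chunksize=CHUNK_ROWS):
    chunk["week"] = chunk["run_date"].dt.to_period("W")
    partials.append(
        chunk.groupby(["plant", "line", "sku", "week"], as_index=False)["cases"].sum()
    )

# Combine the per-chunk summaries, then aggregate once more to get weekly
# demand by plant, line, and SKU across the full history.
weekly_demand = (pd.concat(partials)
                 .groupby(["plant", "line", "sku", "week"], as_index=False)["cases"].sum())
print(weekly_demand.head())
```

The point is not this particular script but the capability: knowing that a few dozen lines of the right tool can summarize data that is hopeless to handle by hand.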


Rationalized Avoidance.  A close cousin of limited data management capability, this often comes wrapped in the rationale that there is no need to “boil the ocean,” or the advice to “just use the 80/20 rule,” without fully recognizing what’s lost in doing so.  Having worked both within industries and with consulting firms, I’ve lost count of the times I’ve heard “don’t boil the ocean” or “apply 80/20” in cases where the missing detail was important and directly applicable to the results we were seeking to achieve.  What I’ve found is that the notion of difficulty is relative, and what some consider “boiling the ocean” is actually focused and remarkably achievable with knowledge of the right tools and methods.

The reasons are numerous, but the result is that incomplete or over-generalized information is factored together to deliver the answers a boss or the business needs in the immediate term, while the more subtle, slower-manifesting impacts reveal themselves over weeks, months, or years, and in ways where the root cause is anything but clear.


Is It Really A Big Deal?


Is it really a big deal?  It depends.  Given a choice of vehicle for a career-defining race that also offers a nice financial prize to the winner, would you opt for a luxury sports car or a Model T?  If you chose the luxury sports car, then it’s probably a big deal to you.



When we’re speaking of specific inputs, data sets, or a specific supply chain issue or task, it’s easy for the big picture and end-to-end results to seem like a distant, philosophical concept, but the impacts are very real.  I like to use the analogy of sports car engineering.  If an otherwise high-end sports car were designed and allowed to function, from an engineering standpoint, with the incomplete data, rules, and generalizations that are sometimes accepted in supply chain environments, you wouldn’t have a high-end sports car that revs powerfully, drives smoothly, and handles with exacting precision; you’d have something closer to an out-of-tune Model T that backfires, misfires, and sputters along.  It may get you there, but it’s not engineered with the precision or level of detail required to accelerate, turn, stop, and handle in all respects like a high-end sports car…in supply chain lingo, the terms are resilience, speed, and flexibility.


What To Do About It?


Expand Data Management Capabilities.  To begin to correct this problem, a supply chain planning organization requires expertise and skills that enable it to identify when and which key inputs and data are missing or over-generalized, as well as the most effective ways to capture them and keep them current.  This requires knowledge of data management tools that go beyond the capabilities of common office applications and venture into the realm of big data or enterprise-grade data management and analytics.  This doesn’t mean these more powerful, complex tools are used on a day-to-day basis or as frequently as spreadsheets and word processors, but knowing when and where to deploy them will be extremely valuable to your organization.


Selectively Sacrifice Speed for Improved Accuracy.  There are certainly times when the urgency of a request will require leveraging the 80/20 rule and accepting degrees of imperfection in the inputs in exchange for speed.  However, the degree of urgency should be considered carefully, and time should be allowed for team members to clean up, correct, and ultimately achieve the goal of complete and accurate inputs and data sets.  Rarely do team members in any supply chain organization have down time for nice-to-have tasks; most people go from juggling one set of important priorities and “fire drills” to another.  Still, team members and leaders must take a shrewd look at urgency versus time-to-complete and allow additional time to improve and correct inputs.  Not only will this pay recurring dividends in the future, it will also support improved outcomes and future time savings from not having to work around gaps and missing information.


Build Modeling and Optimization Expertise.  In production environments, proficiency in modeling and optimization concepts becomes extraordinarily beneficial for identifying the data components, structures, and management methods that enable modeling and what-if scenario planning close to the real-world set of possibilities, while maintaining efficient and effective data collection and management approaches.  It’s also important to note that this is typically not a one-time requirement.  A business and supply chain operation changes as it becomes more efficient, and the models, methods, and inputs will change as well, requiring refinement as supply chain management capabilities and operations improve.
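As a small illustration of the kind of modeling this points toward, the sketch below sets up a tiny transportation-cost optimization with SciPy’s linear programming solver.  The plants, distribution centers, costs, supplies, and demands are all made up for the example; a real model would draw them from the item master, production, and distribution data discussed above.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical cost per case to ship from each plant to each DC.
#                  DC_East  DC_West
costs = np.array([[2.10,    4.60],   # Plant A
                  [3.80,    1.90]])  # Plant B

supply = np.array([900, 700])  # cases available at Plant A, Plant B
demand = np.array([800, 600])  # cases required at DC_East, DC_West

# Decision variables: x = [A->East, A->West, B->East, B->West]
c = costs.flatten()

# Supply constraints: each plant ships no more than it has.
A_ub = np.array([[1, 1, 0, 0],
                 [0, 0, 1, 1]])
b_ub = supply

# Demand constraints: each DC receives exactly what it needs.
A_eq = np.array([[1, 0, 1, 0],
                 [0, 1, 0, 1]])
b_eq = demand

result = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                 bounds=[(0, None)] * 4, method="highs")
print("Cases shipped per lane:", result.x.round(1))
print("Minimum total cost:", round(result.fun, 2))
```

Even a toy model like this makes clear which inputs (costs, capacities, demand) have to be complete and correct before the answer means anything; scale it up to real plants, lines, and lanes and the value of accurate, detailed inputs compounds just as quickly as the gaps do.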


The 80/20 rule and generalizations certainly have a place, the risk of going into too much detail for a given objective is real, and a good outcome obtained quickly can be exactly what’s needed rather than the best outcome over a longer period, but each of these should be weighed judiciously against the bigger-picture supply chain and business objectives.  Starting out with incomplete inputs, data gaps, or generalized information sets may be a necessity, but eliminating them should be prioritized, with time allocated to improve and complete these inputs as gaps are encountered.  Identifying when you must use the 80/20 rule versus when it’s better to incur additional time or effort in the short term for a more sustainable result in the long term will be a key factor in advancing your organization’s supply chain performance and operational outcomes.

 
 
 
