Category: Quality

Strategies to improve Quality are typically founded on rework and scrap reduction.  Measurement of Quality is not limited to the Quality factor for OEE.  The cost of non-Quality is also a key metric for managing overall quality performance.

Lean – Burnout, Apathy, and Pareto’s Law

Example Pareto chart
Typical Application to Analyze Quality Defects

The Premise:  Pareto’s Law

The late Joseph Juran introduced the world to Pareto’s Law, aptly named after Italian economist Vilfredo Pareto.  Many business and quality professionals alike are familiar with Pareto’s Law and often refer to it as the 80 / 20 rule.  In simple terms, Pareto’s Law is based on the premise that 80% of the effects stem from 20% of the causes.

As an example, consider that Pareto’s Law is often used by quality staff to determine the cause(s) responsible for the highest number of defects as depicted in the chart to the right.  From this analysis, teams will focus their efforts on the top 1 or 2 causes and resolve to eliminate or substantially reduce their effect.

In this case, the chart suggests that the highest number of defects is due to shrink, followed by porosity.  At this point a problem solving strategy is established using one of the many available tools (8 Discipline Report, 5 Why, A3) to resolve the root cause and eliminate the defect.  Over time and with continued focus, the result is a robust process that yields 100% quality, defect-free products.
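As a sketch of how such an analysis might be computed, the short Python example below ranks a set of hypothetical defect counts and accumulates their percentages – the two ingredients of a Pareto chart.  The causes and counts here are invented for illustration only:

```python
# Rank defect causes from most to least frequent and compute the
# cumulative percentage of total defects each cause accounts for.
def pareto(counts):
    total = sum(counts.values())
    cumulative = 0
    rows = []
    for cause, n in sorted(counts.items(), key=lambda kv: kv[1], reverse=True):
        cumulative += n
        rows.append((cause, n, round(100 * cumulative / total, 1)))
    return rows

# Hypothetical defect counts for a casting process:
defects = {"shrink": 120, "porosity": 45, "flash": 20, "cold shut": 10, "other": 5}

for cause, n, cum_pct in pareto(defects):
    print(f"{cause:<10} {n:>4} {cum_pct:>6}%")
```

With these invented numbers, the top two causes (shrink and porosity) account for over 80% of all defects, which is exactly the pattern Pareto’s Law predicts.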

In practice, this approach seems logical and has proven to be effective in many instances.  However, we need to be cognizant of a potential side effect that may be one of the reasons why new initiatives quickly wane to become “the program of the day.”

The Side Effects:  Burnout and Apathy

Winning the team’s confidence is often one of the greatest challenges for any improvement initiative.  A common strategy is to select a project where success can be reasonably assured.  If we apply Pareto’s Law to project selection, we are inclined to select a project that is either relatively easy to solve, offers the greatest savings, or both.

In keeping with the example presented in the graphic, resolving the “shrink” concern presents the greatest opportunity.  However, we can readily see that, once resolved, the next project presents a significantly lower return and the same is true for each subsequent project thereafter.

Clearly, as each problem is resolved, the return is diminished.  To compound matters, problems with lower rates of recurrence are often more difficult to solve and the monies required to resolve them cannot be justified due to the reduced return on investment.  In other words, we approach the point where the solution is as elusive as “the needle in a haystack” and, once found, it simply isn’t feasible to fund it.

The desire to resolve the concern is significantly reduced with each subsequent challenge as the return on investment in time and money diminishes while the team continues to expend more energy.  Over extended periods of time the continued pursuit of excellence leads to apathy and may even lead to burnout.  As alluded to earlier, adding to the frustration is the inability to achieve the same level of success offered by the preceding opportunities.

The Solution

One of the problems with the approach as presented here is the focus on resolving the concern or defect that is associated with the greatest cost savings.  To be clear, Pareto Analysis is a very effective tool to identify improvement opportunities and is not restricted to just quality defects.  A similar Pareto chart could be created just as easily to analyze process down time.

Perhaps the real problem is that we’re sending the wrong message:  Improvements must have an immediate and significant financial return.  In other words, team successes are typically recognized and rewarded in terms of absolute cost savings.  Not all improvements will have a measurable or immediate return on investment.  If a condition can be improved or a problem can be circumvented, employees should be empowered to take the necessary actions regardless of where the issues fall on the Pareto chart.

To assure sustainability, we need to focus on the improvement opportunities that are before us with a different definition of success, one with less emphasis on cost savings alone.  Is it possible to make improvements for improvement’s sake?  We need to take care of the “low hanging fruit”, and that likely doesn’t require a Pareto analysis to find it.

Finally, not all improvement strategies require a formal infrastructure to assure improvements occur.  In this regard, the ability to solve problems at the employee level is one of the defining characteristics that distinguishes companies like Toyota from others that are trying to be like them.  Toyota and the principles of lean are not reliant on tools alone to identify opportunities to improve.

Until Next Time – STAY lean!

Vergence Analytics

Twitter:  @Versalytics

OEE in an imperfect world

A selection of Normal Distribution Probability...
Image via Wikipedia

Background: This is a more general presentation of “Variation:  OEE’s Silent Partner” published on January 31, 2011.

In a perfect world we can produce quality parts at rate, on time, every time.  In reality, however, all aspects of our processes are subject to variation that affects each factor of Overall Equipment Effectiveness:  Availability, Performance, and Quality.

Our ability to effectively implement Preventive Maintenance programs and Quality Management Systems is reflected in our ability to control and improve our processes, eliminate or reduce variation, and increase throughput.

The Variance Factor

Every process and measurement is subject to variation and error.  It is only reasonable to expect that metrics such as Overall Equipment Effectiveness and Labour Efficiency will also exhibit variance.  The normal distributions for four (4) different data sets are represented by the graphic that accompanies this post.  You will note that the average for 3 of the curves (Blue, Red, and Yellow) is common (μ = 0), yet the shapes of the curves are radically different.  The green curve shows a normal distribution that is shifted to the left, with an average (μ) of -2, although we can see that the standard deviation for this distribution is better than that of the yellow and red curves.

The graphic also allows us to see the relationship between the Standard Deviation and the shape of the curve.  As the Standard Deviation increases, the height decreases and the width increases.  From these simple representations, we can see that our objective is to reduce the standard deviation.  The only way to do this is to reduce or eliminate variation in our processes.
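This relationship can be checked numerically.  The peak height of a normal curve is 1 / (σ√(2π)), so doubling the standard deviation halves the height.  A quick sketch using only Python’s standard library:

```python
# The pdf value at the mean is the peak height of the bell curve:
# peak = 1 / (sigma * sqrt(2 * pi)), so larger sigma -> flatter, wider curve.
from statistics import NormalDist

for sigma in (0.5, 1.0, 2.0):
    peak = NormalDist(mu=0, sigma=sigma).pdf(0)
    print(f"sigma = {sigma}: peak height = {peak:.4f}")
```

Running this shows the height dropping as sigma grows, mirroring the blue, red, and yellow curves in the graphic.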

We can use a variety of statistical measurements to help us determine or describe the amount of variation we may expect to see.  Although we are not expected to become experts in statistics, most of us should already be familiar with the normal distribution or “bell curve” and terms such as Average, Range, Standard Deviation, Variance, Skewness, and Kurtosis.  In the absence of an actual graphic, these terms help us to picture what the distribution of data may look like in our mind’s eye.

Run Time Data

The simplest common denominator and most readily available measurement for production is the quantity of good parts produced.  Many companies have real-time displays that show quantity produced and in some cases go so far as to display Overall Equipment Effectiveness (OEE) and its factors – Availability, Performance, and Quality.  While the expense of live streaming data displays can be difficult to justify, there is no reason to abandon the intent that such systems bring to the shop floor.  Equivalent means of reporting can be achieved using “whiteboards” or other forms of data collection.

I am concerned with any system that is based solely on cumulative shift or run data and does not include run time history.  Indeed, an often overlooked opportunity for improvement is the lack of stability in productivity or throughput over the course of the run.  Systems with run time data allow us to identify production patterns and significant swings in throughput, and to correlate this data with down time history.  This production story board allows us to analyze sources of instability, identify root causes, and implement timely and effective corrective actions.  For processes where throughput is highly unstable, I recommend a direct hands-on review on the shop floor in lieu of post production data analysis.

Overall Equipment Effectiveness

Overall Equipment Effectiveness and the factors Availability, Performance, and Quality do not adequately or fully describe the capability of the production process.  Reporting on the change in standard deviation as well as OEE provides a more meaningful understanding of the process  and its inherent capability.

Improved capability also improves our ability to predict process throughput.  Your materials / production control team will certainly appreciate any improvements to stabilize process throughput as we strive to be more responsive to customer demand and reduce inventories.

Until Next Time – STAY lean!

Vergence Analytics

Twitter:  @Versalytics

Quality is Priceless

The price tag for Toyota’s recent recall campaigns is estimated to be more than $2 Billion, and the loss in shareholder value is likely many times more than this.  Yet we remain optimistic and anticipate that Toyota will make it through this crisis.  We can only imagine what this kind of money could buy if it wasn’t spent on repairing vehicles.

In our previous posts we differentiated between design and process failures.  Today we learned of yet another Toyota recall issued yesterday.  This time 8,000 of the 2010 four-wheel drive Toyota Tacoma pickup trucks are being recalled for possible cracks in the front drive shaft.  In this case the supplier, Dana Corporation, discovered a problem with their manufacturing process that may have affected parts supplied to Nissan and Ford as well.  Click here to read the full story.

We are reminded of the book titled “Quality is Free” written by the late Philip B. Crosby.  Many manufacturers around the world have learned that the cost of failure knows no bounds.  While it is possible to calculate the costs to repair defective products, the losses incurred due to lost sales, law suits, pending investigations, public relations, and reduced consumer confidence in general will never be known.

Because businesses are not charities, we can only expect that the price of future product offerings will include a portion of the company’s latest financial liabilities.  Naturally, if every product sold performed as expected or better and without flaw or incident, we could continue to focus on improving the quality of both products and processes.

It has been said that success breeds failure.  Success creates contentment, giving rise to complacency, and in turn results in lost focus.  So, what is the value of a process that yields perfect products?  In today’s global economy quality isn’t just a given – quality is priceless.

Until Next Time – STAY lean!

Contingency Plans – Crisis Management in Lean Organizations

Contingency Planning For Lean Organizations – Part IV – Crisis Management

In a previous post we alluded to the notion that lean organizations are likely to be more susceptible to disruptions or adverse conditions, and that such events may even have a greater impact on the business.  To some degree this may be true; in reality, however, Lean has positioned these organizations to be more agile and extremely responsive to crisis situations, allowing them to mitigate losses.

True lean organizations have learned to manage change as a normal course of operation.  A crisis only presents a disruption of larger scale.  Chapter 10 of Steven J. Spear’s book, “Chasing the Rabbit”, exemplifies how high velocity, or lean, organizations have managed to overcome significant crisis situations that would typically cripple most organizations.

Problem solving is intrinsic at all levels of a lean organization and, in the case of Toyota, problem solving skills extend beyond the walls of the organization itself.  It is clear that an infrastructure of people having well developed problem solving skills is a key component to managing the unexpected.  The events presented in this chapter demonstrate the agility that is present in a lean organization, namely Toyota and its supplier base.

Training is a Contingency

Toyota has clearly been the leader in Lean manufacturing and even more so in developing problem solving skills at all levels of the organization company-wide.  The primary reason for this is the investment that Toyota puts into the development of people and their problem solving skills at the onset of their employment with the company.  The ability to see problems, correct them in real time, and share the results (company-wide) is a testament to the system, and its effectiveness has been proven on many occasions.

Prevention, preparation, and training (which is also a form of prevention) are as much an integral part of contingency planning as are the actual steps that must be executed when a crisis situation occurs.  Toyota has developed a rapid response reflex that is inherent in the organization’s infrastructure, allowing it to rapidly regain its capabilities when a crisis strikes.

Crisis Culture

We highly recommend reading Steven J. Spear’s “Chasing the Rabbit” to learn and appreciate the four capabilities that distinguish “High Velocity” organizations.  The key to lean is creating a cultural climate that is driven by the relentless pursuit of improvement and elimination of waste.  Learning to recognize waste and correcting the condition as it occurs requires keen observation and sharp problem solving skills.

Creating a culture of this nature is an evolutionary process – not revolutionary.  In many ways the simplicity of the four capabilities is its greatest ally.  Instilling these principles and capabilities into the organization demands time and effort, but the results are well worth it.  Lean was not intended to be complex, and the principles demonstrated and exemplified in Chasing the Rabbit confirm this to be true.  This is not to be construed as saying that the challenges are easy … but with the right team they are certainly easier.

Until Next Time – STAY Lean!


Contingency Planning For Lean Operations – Part I


Lean operations are driven by effective planning and efficient execution of core activities to ensure optimal performance is achieved and sustained.  The very nature of lean requires extreme attention to detail through all phases of planning and execution.  Upstream operations simply cannot tolerate any disruptions in product supply or process flow without the risk of incurring significant downtime costs or other related losses.

Effective risk management methods, contingency plans, and loss prevention strategy are critical components of successful operations management in a lean operation.  Risk management and preventing disruptions is the subject of contingency planning and requires the participation of all team members.

Successful contingency planning assures the establishment of an effective communication strategy and identification of core activities and actions required.  Contingency plans may require alternative methods, processes, systems, sources, or services and must be verified, validated, and tested prior to implementation.

Understanding and assessing the potential risks to your operation is the basis for contingency planning with the objective to minimize or eliminate potential losses.

Inventory represents the most basic form of contingency planning.  Safety stock or buffer inventories are typically used to minimize the effects of equipment downtime or disruptions in the supply chain. 

The levels of inventory to maintain are dependent on a number of factors including Lead Time, Value, Carrying Cost, Transit Time (Distance), Shelf Life, Minimum Order Quantities, Payment Terms, and Obsolescence.
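As an illustration only, a common textbook approach sizes safety stock from demand variability and lead time.  The formula, service level, and figures below are assumptions for the sake of example, not a recommendation for any particular operation:

```python
# Textbook safety stock under normally distributed daily demand and a
# fixed lead time:  safety stock = z * sigma(daily demand) * sqrt(lead time),
# where z is the normal quantile for the desired service level.
from math import sqrt
from statistics import NormalDist

def safety_stock(service_level, demand_std_per_day, lead_time_days):
    z = NormalDist().inv_cdf(service_level)  # ~1.645 for a 95% service level
    return z * demand_std_per_day * sqrt(lead_time_days)

# Hypothetical: 95% service level, daily demand std dev of 20 units,
# 9-day replenishment lead time.
print(round(safety_stock(0.95, 20, 9)), "units")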

Why is this relevant?

Material and Labour represent two key resources that may be influenced by external factors that are beyond the control of any company policy or practice.  Internally controlled or managed resources such as facilities, equipment, and tooling are less susceptible to unknown elements.  For the purposes of this discussion, we will examine Labour in a little more detail.

The H1N1 virus, originally known as the Swine Flu, is the latest potential health pandemic since the outbreak of SARS only a few years ago.  The government has been struggling to organize mass immunization clinics and to engage the media to aid in the cause.  In the meantime, the potential impact of the H1N1 virus on your operation remains unknown.

Experts have commented to the media that the lessons from the SARS outbreak have still not been learned.  One would expect that past practices would have already been adopted into new best practices from our experiences with other similar events in our history.

Government agencies at all levels (Federal, Provincial, and local) have mismanaged the activities required to procure and distribute the vaccine, and failed to provide an effective communication and immunization strategy to ensure the risk to public health was minimized or, at the very least, understood.

The lack of coordination and accountability for the success or failure of the communication strategy, the procurement and distribution of the vaccine, and other related activities is a strong indicator that the planning process did not consider the infrastructure requirements and relationships needed between levels of government.

The lack of an effective communication strategy introduced confusion and speculation in the media and the general public.  Mass education only seemed to become more aggressive as incidents of severe H1N1 complications and related deaths were reported in the media.

If this really was a pandemic event, many operations today would (and may still) be adversely affected due to direct or indirect (supply chain) labour shortages.  Do you have contingency plans in place to address this concern?

It could be argued that “if we are affected to this extent, then our customers will be as well.”  This is not necessarily true unless your customers and / or suppliers are located in the same immediate area or region of your business.

People travel all the time, whether they are commuting to work from out-of-town or traveling to or arriving from a foreign country on business.  The source of exposure is beyond your immediate control. 

What other elements can directly impact labour?  We will explore some of these in our next post.  In the meantime, keep your hands washed and remember to cough into your sleeve.

Until Next Time – STAY Lean!

Unexpected and Appreciated – Uncommon Courtesy:  This morning, a person cut into the drive through lane ahead of us – not realizing the gap in the line was there for thru traffic.  Recognizing the error in drive through etiquette and to make amends, we were pleasantly surprised by the “free” coffee at the pick up window.  Thank you ladies!

How to Solve Problems with Idea Maps

FreeMind 0.9.0 RC4 - Mind Map with User Icons
Image via Wikipedia

Problem solving is a problem in itself for many companies and at times can be one of the most daunting tasks to undertake during the course of an otherwise regular work day.

For some, problems seldom occur while for others this may, unfortunately, be a daily activity.  Since problem solving is not usually part of the typical daily agenda of “routine” activities, our ability to find the time and solve them efficiently and effectively is compromised.

For many, just finding time seems to be one of the greatest challenges and perhaps a problem to be solved in itself.  Sweeping problems under the rug may be efficient but it is certainly not effective.  (So … a broom is not the solution we’re proposing.)

Using IDEA Mapping Techniques can help you solve problems effectively and efficiently.  IDEA Maps, Process Maps, and Mind Maps are variations on a theme.  We may use the terms interchangeably in the discussion that follows.

Background:

While there are several different approaches and “forms” that can be used to manage the overall problem solving process, the two most critical steps that will determine the effectiveness of the solution are:

  1. Define a Clear and Concise Problem Description / Statement
  2. Determine the Root Cause(s) of the problem defined by the Problem Statement.

While the first step seems relatively simple, the second step requires a little more effort.  There are at least two (2) root causes for most problems that stem from two simple questions:

  • Why Made?
  • Why Shipped?

These questions imply that defective product was made for a reason (process) and it was shipped to the customer undetected (system).  In other words, the customer is not protected from receipt of defective product.

The root cause analysis process forms the basis for all subsequent problem solving activities, including verification, interim and long term corrective actions.  A lot of time can be wasted simply because the real root causes were never identified.

Problem Solving Tools for Root Cause Analysis:

Many different tools can be deployed during the Root Cause Analysis process including Ishikawa Diagrams (Fishbone Diagrams), 5 Why (discussed in a previous post), Fault Tree Analysis, Q&A (Question Board), and Brain Storming to name just a few.

Mind Mapping or Process Mapping is a technique that provides an unconstrained approach to the thinking process for multiple input and contribution streams.  Maps can also be used to identify interactions or relationships to other elements.

Mind Mapping (Process Mapping)

The center of the map contains the problem statement.  We then surround the problem statement with potential inputs or contributors to the problem.  These statements in turn become the “center” of additional levels of inputs and contributors.  In some respects, the process map can be very similar to a Bloom Diagram and certainly supports the logic found with fishbone diagrams.
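Since a mind map is simply a tree, the structure described above can be sketched as a nested dictionary in Python.  The problem statement and the causes shown here are hypothetical placeholders:

```python
# The problem statement sits at the root; each contributor becomes the
# "center" of its own sub-branches, exactly as on a whiteboard map.
problem_map = {
    "Porosity in casting": {
        "Machine": {"Vacuum leak": {}, "Worn plunger tip": {}},
        "Material": {"Moisture in the alloy": {}},
        "Method": {"Spray cycle too long": {}},
    }
}

def outline(node, depth=0):
    """Flatten the map into an indented outline, one branch per line."""
    lines = []
    for label, children in node.items():
        lines.append("  " * depth + label)
        lines.extend(outline(children, depth + 1))
    return lines

print("\n".join(outline(problem_map)))
```

The same tree could be deepened level by level during a brainstorming session, with each new answer becoming a child node.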

The drawback to “Mapping” is that most maps are developed on whiteboards and are not easily or readily translated into a software solution.

Software Solutions and Templates

While there are many spreadsheet based solutions, few provide an effective interface to support the use of mapping techniques.  Even most fishbone diagrams developed in Excel are quirky and awkward at best.

While we typically do not endorse specific software solutions, FreeMind is one that we consider to be among the best available, and it can be downloaded free of charge.  The download and installation process only requires a few minutes.

The developers of FREEMIND provide a clean, intuitive solution for creating and maintaining process or mind maps.  While other commercial packages are available, FreeMind is more than capable of handling most problem solving challenges and quite simply is time and money well saved.

The FreeMind homepage provides a better description of the software and its capabilities than we could provide here.  Our goal was to introduce “Mapping” as an effective and efficient tool that can be used in the problem solving process.

After spending some time with the software, you will quickly discover that there are many other opportunities where this software can serve you.  We have a mind map that we use to manage weekly and daily reports, another for key metrics, and yet another for our business structure.  The ability to use hyperlinks makes it an easy process to access external reports and resources.

The FreeMind main page provides an excellent overview and provides examples of their software in action.  This is definitely worth looking into and may just save some time for real problem solving.

We are presently using FreeMind version 0.9.0 RC 6.

Home: http://freemind.sourceforge.net/

Copyright 2000-2009 Joerg Mueller, Daniel Polansky, Christian Foltin, Dimitry Polivaev, and others.

Click here to see a sample process map to achieve delivery of 100% on time – in full:  Mapping with FreeMind.  We have also uploaded two documents (one of the original map and a word document showing a pictorial of the mind maps we created) into our Free Downloads box.  See the ORANGE box on the sidebar to get your copy.

If you have a copy of FreeMind, simply change the extension on our Delivery file from “.txt” to “.mm”.  Of course, don’t type the quotes.  This is just a sample for example purposes only.  Feel free to edit or modify these files in any manner you choose.
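If you’re curious what is inside a “.mm” file, it is plain XML.  A minimal hand-written sketch (the node names here are hypothetical placeholders, not from our Delivery file) looks something like this:

```xml
<map version="0.9.0">
  <node TEXT="Late delivery">
    <node TEXT="Capacity" POSITION="right">
      <node TEXT="Unplanned downtime"/>
    </node>
    <node TEXT="Scheduling" POSITION="left">
      <node TEXT="Late order entry"/>
    </node>
  </node>
</map>
```

Each nested `node` element is one branch of the map, which is why renaming a well-formed text file to “.mm” is enough for FreeMind to open it.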

If you would like to learn more about IDEA Mapping we would encourage you to also read Idea Mapping – How to Access Your Hidden Brain Power, Learn Faster, Remember More, and Achieve Success in Business by author Jamie Nast (twitter:  @JamieNast) or you can visit the website at:  http://www.ideamappingsuccess.com/.

Click here to review or purchase your copy of Idea Mapping: How to Access Your Hidden Brain Power, Learn Faster, Remember More, and Achieve Success in Business

Until Next Time – STAY lean!

IDEA Mapping, Published by John Wiley & Sons, Inc, Hoboken, New Jersey, Published simultaneously in Canada (ISBN-13:  978-0-471-78862-1, ISBN-10:  0-471-78862-7), 268 pages.  The book includes a companion CD-ROM featuring a 21 day trial for Mindjet MindManager 6.

Welcome to LeanExecution!

Welcome! If you are a first time visitor interested in getting started with Overall Equipment Effectiveness (OEE), click here to access our very first post “OEE – Overall Equipment Effectiveness“.

We have presented many articles featuring OEE (Overall Equipment Effectiveness), Lean Thinking, and related topics.  Our latest posts appear immediately following this welcome message.  You can also use the sidebar widgets to select from our top posts or posts by category.

Free Downloads

All downloads mentioned in our articles and feature posts are available from the FREE Downloads page and from the orange “FREE Downloads” box on the sidebar.  You are free to use and modify these files as required for your application.  We trust that our free templates will serve their intended purpose and be of value to your operation.

Visit our EXCEL Page for immediate access to websites offering answers and solutions for a wide variety of questions and problems.  Click here to access the top ranking Excel Dashboards.  Convert your raw data into intelligent data to drive intelligent metrics that will help you to analyze and manage your business effectively.

Questions, Comments, Future Topics

Your comments and suggestions are appreciated.  Feel free to leave a comment or send us your feedback by e-mail to LeanExecution@gmail.com or VergenceAnalytics@gmail.com.  We respect your privacy and will not distribute, sell, or share your contact information to any third parties.  What you send to us stays with us.

Subscribe to our blog and receive notifications of our latest posts and updates.  Simply complete the e-mail subscription in the sidebar.  Thank you for visiting.

Until Next Time – STAY lean!

Vergence Analytics

How to Calculate the Quality Factor for OEE

How to correctly calculate the Quality Factor for OEE

Most people assume that the quality factor for Overall Equipment Effectiveness (OEE) is determined by simply calculating the yield of good parts from the total parts produced.  Unfortunately, this logic does not hold true when calculating the quality factor beyond the individual part or process.

We will show you how to correctly calculate the Quality factor and determine a truly weighted result that is consistent with the definition of Overall Equipment Effectiveness.  Although OEE itself does not have a unit of measure, it is based on the effective use of time.

The Quality Factor Defined

Although OEE itself is expressed as a percentage, all of the individual OEE factors are based on time.  Yes, even the quality factor:

The quality factor measures the percentage of time that was used to make or manufacture an acceptable quality product at rate or standard.

We have witnessed too many organizations that attempt to immediately convert the Quality Factor into a Cost of Non-Quality, Parts per Million (PPM), or other type of metric.  This is not the intent of the quality factor from an overall equipment effectiveness perspective.  Again, OEE measures effective use of time.

While it is not our intent to delve into a cost of non-quality discussion, we agree that understanding the cost drivers is in the best interests of the company to minimize losses.  This includes any investment that must be made to improve OEE.

We would also encourage you to download a copy of our Excel spreadsheets (see the BOX file on the sidebar).  There are no charges or fees for downloading these files and we request that these products remain available as such.  Now, let’s move on to the Quality Factor.

Free Download ->>> Click here to download a copy of the example developed in this post! <<<-Free Download

Where did the time go?

By definition, OEE is used to determine how effectively the time for a given machine, process, or resource is used: 

  • Availability:  Planned (Scheduled) versus Unplanned downtime
  • Performance:  Standard versus Actual cycle time
  • Quality:  Value Added versus Non-Value Added time

All of the OEE factors pertain to time.  From our definition above, the factors are independent of people (labour) required, parts produced, defective product, or the value of these items.  However, when we review many OEE templates, and more specifically the quality factor calculation, the time element is lost.
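As an illustration of this time-based view, here is a short Python sketch that expresses all three factors as ratios of time.  The shift figures are hypothetical, and this is one common formulation rather than a definitive implementation:

```python
# Express Availability, Performance, and Quality as ratios of time.
def oee_factors(planned_min, downtime_min, std_rate_ppm, total_parts, defective):
    run_time = planned_min - downtime_min
    availability = run_time / planned_min
    ideal_time = total_parts / std_rate_ppm      # minutes earned at standard rate
    performance = ideal_time / run_time
    lost_time = defective / std_rate_ppm         # minutes spent making scrap
    quality = (ideal_time - lost_time) / ideal_time
    return availability, performance, quality

# Hypothetical shift: 480 planned minutes, 80 minutes of downtime,
# a 2 parts/minute standard, 700 parts produced, 14 defective.
a, p, q = oee_factors(480, 80, 2, 700, 14)
print(f"A = {a:.1%}  P = {p:.1%}  Q = {q:.1%}  OEE = {a * p * q:.1%}")
```

Note that parts and defects enter the calculation only after being converted to minutes at the standard rate, which is the point the next section develops.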

The true Quality Factor formula

The simple yield calculation works for a single process or part number but not for multiple machines or part numbers.  A simple example will demonstrate the correct way to calculate the Quality factor for a single part.  We will expand on this simple example as we go along.  Click here to download your free copy of the spreadsheet used in this post.

Note:  We are using the standard rate for the Quality time calculations as the Availability and Performance factors already account for downtime and cycle time losses respectively.  Quality is based on the pure standard rate or cycle time only.

EXAMPLE:  Machine A – Production Summary

Part Number | Rate / Minute | Total Produced | Defective Quantity | Yield %
1           | 2             | 800            | 10                 | 98.75%
Totals      | –             | 800            | 10                 | 98.75%
Averages    | 2             | 800            | 10                 | 98.75%

As we can see from the table above, machine A produces part number 1 at a standard rate of 2 parts / minute.  A total of 800 parts are produced of which 10 are defective and scrapped.  The simple yield formula will correctly calculate the Quality factor as:

Quality Yield = (800 – 10) / 800 = 790 / 800 = 98.75%

From an OEE perspective, however, our interest is not how many parts were scrapped, but rather, how much machine or process time we lost by making them.  From our example, 10 defective parts result in a loss of 5 minutes: 

Lost Time = 10 parts / (2 parts / minute) = 5 minutes

The quality factor actually tells us how effectively the time was used to make good or acceptable parts.  From our example, the time required to make ALL parts at the standard rate is 400 minutes (800 parts ÷ 2 parts per minute = 400 minutes).  Our Quality factor can easily be calculated as follows: 

Value Added Time = Total Time – Non-Value Added Time
                 = 400 – 5
                 = 395 minutes

Total Time (All Parts) = 400 minutes

Quality Factor = Value Added Time / Total Time
               = 395 / 400
               = 98.75%

Although the results in this case are the same, the method is uniquely different.  Since this is a single machine running a single part, the cycle time cancels in the formula as shown below:

= [(800 – 10) ÷ 2 parts per minute] / [800 ÷ 2 parts per minute] = 790 / 800
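The single-part arithmetic above can be sketched in a few lines of Python (a minimal sketch of our own; the variable names are ours, not from the spreadsheet):

```python
# Single part number on one machine: the time-based Quality factor
# reduces to the simple yield because the standard rate cancels out.
total_produced = 800
defective = 10
rate = 2.0  # standard rate, parts per minute

total_time = total_produced / rate           # 400 minutes at standard rate
lost_time = defective / rate                 # 5 minutes making defects
value_added_time = total_time - lost_time    # 395 minutes

quality_time_based = value_added_time / total_time                    # 395 / 400
quality_simple_yield = (total_produced - defective) / total_produced  # 790 / 800

print(f"{quality_time_based:.4%}")    # 98.7500%
print(f"{quality_simple_yield:.4%}")  # 98.7500%
```

Both calculations agree here; as the next section shows, that agreement breaks down as soon as parts with different cycle times share the same machine.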

The YIELD pitfall revealed:

Our calculation method becomes relevant when we start looking at the production of different parts running through the same machine or process.  The easiest way to demonstrate this is by extending our first example.

Let’s assume we are also using machine A to produce two additional part numbers.  The production data is summarized in the table below as follows:

EXAMPLE:  Machine A – Production Summary

| Part Number | Rate / Minute | Total Produced | Defective Quantity | Yield % (Quantity) |
| --- | --- | --- | --- | --- |
| 1 | 2 | 800 | 10 | 98.75% |
| 2 | 8 | 1600 | 160 | 90.00% |
| 3 | 1 | 800 | 20 | 97.50% |
| Totals | — | 3200 | 190 | 94.06% |
| Averages | 4 | 1067 | 63 | 95.42% |

If we calculate the Quality factor for Machine A, the simple yield formula will provide a misleading result.  Note that we have provided the process yield for each line item part number, as we have already determined that the time factors cancel for individual parts.

The average Yield % from the table above is 95.42%.  We will demonstrate that this result is also incorrect.  Remember, we’re interested in the percent of total time used to make a quality product (also known as Value Added Time).

The real question is, “What is the overall Quality factor for machine A?”  The simple yield formula would suggest the following:

Simple Yield Quality Factor = (3200 – 190) / 3200 = 3010 / 3200 = 94.06%

This percentage is misleading and – as we will demonstrate – the WRONG result.

Calculating the True Weighted Quality Factor

Let’s take the table from above and expand on it to reflect our TIME based calculations.  We will calculate the time required to produce all parts (Total Time) and the time lost to produce defective parts (Lost Time).  Remember, these times are calculated at the standard cycle time or rate.  The resulting table appears below:

EXAMPLE:  Machine A – Production Summary

| Part Number | Rate / Minute | Total Produced | Total Time | Defective Quantity | Lost Time | Yield % (Time) |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | 2 | 800 | 400 | 10 | 5 | 98.75% |
| 2 | 8 | 1600 | 200 | 160 | 20 | 90.00% |
| 3 | 1 | 800 | 800 | 20 | 20 | 97.50% |
| Totals | — | 3200 | 1400 | 190 | 45 | 96.79% |
| Averages | 4 | 1067 | 467 | 63 | 15 | 95.42% |

From this table, we can quickly calculate the true weighted quality factor as follows:

Quality Factor = Value Added Time / Total Time
               = (1400 – 45) / 1400
               = 1355 / 1400
               = 96.79%
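A short Python sketch (our own illustration, not from the downloadable spreadsheet) reproduces the three competing numbers for Machine A:

```python
# Machine A part mix: (standard rate in parts/minute, total produced, defective)
parts = [(2.0, 800, 10), (8.0, 1600, 160), (1.0, 800, 20)]

total_time = sum(qty / rate for rate, qty, _ in parts)   # 400 + 200 + 800 = 1400 min
lost_time = sum(bad / rate for rate, _, bad in parts)    # 5 + 20 + 20 = 45 min

weighted_quality = (total_time - lost_time) / total_time    # time weighted
simple_yield = sum(q - b for _, q, b in parts) / sum(q for _, q, _ in parts)
average_yield = sum((q - b) / q for _, q, b in parts) / len(parts)

print(f"weighted: {weighted_quality:.2%}")  # 96.79%  (correct)
print(f"simple:   {simple_yield:.2%}")      # 94.06%  (misleading)
print(f"average:  {average_yield:.2%}")     # 95.42%  (also misleading)
```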

Putting it ALL together

From the discussion above, we have combined the results into the table below:

EXAMPLE:  Machine A – Production Summary

| Part Number | Rate / Minute | Total Produced | Total Time | Defective Quantity | Lost Time | Yield % (Quantity) | Yield % (Time) | Delta |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 2 | 800 | 400 | 10 | 5 | 98.75% | 98.75% | 0.00% |
| 2 | 8 | 1600 | 200 | 160 | 20 | 90.00% | 90.00% | 0.00% |
| 3 | 1 | 800 | 800 | 20 | 20 | 97.50% | 97.50% | 0.00% |
| Totals | — | 3200 | 1400 | 190 | 45 | 94.06% | 96.79% | 2.72% |
| Averages | 4 | 1067 | 467 | 63 | 15 | 95.42% | 95.42% | 0.00% |

The true weighted quality factor can be found in the Yield % Time column (96.79%).  This result fits the true definition of Overall Equipment Effectiveness. 

The table also shows that the differences between the methods can lead to a significant variance between the results (96.79% – 94.06% = 2.72%): 

  • 94.06% (Simple Yield)
  • 95.42% (Average Yield)
  • 96.79% (Weighted Yield)

We can easily prove which answer is correct.  Referring to the table below, the only method that reproduces the correct time calculations is the Yield Time % factor (96.79%).  The table shows that the true Value Added Time (Earned Time) is 1355 minutes and the total time lost due to defective parts is 45 minutes, exactly what we expected to find based on our earlier calculations.

Quality Factor – Validation Table – All Times are in minutes

| Method | "Yield %" | Total Time | Earned | Lost Time | Delta Time |
| --- | --- | --- | --- | --- | --- |
| Yield Quantity % | 94.06% | 1400 | 1316.9 | 83.1 | 38.1 |
| Average Yield % | 95.42% | 1400 | 1335.8 | 64.2 | 19.2 |
| Yield Time % | 96.79% | 1400 | 1355.0 | 45.0 | 0.0 |

What does all this mean in terms of time?  The results in this table clearly demonstrate that a seemingly small delta of 2.72% between the different methods of calculating the Quality Factor can be significant in terms of time.  The Delta Time shown in the table is the difference between the lost time calculated for each method and the true lost time of 45 minutes.

If this machine were actually scheduled to run 450 minutes per shift on 2 shifts, the results would be even more dramatic over the course of a year.  Assuming the machine is loaded with the same part mix and there are 240 working days per year:

Annual Working Time = 240 * 450 * 2 = 216,000 minutes

The following table summarizes the results on an annualized basis: 

Quality Factor – Annualized Results – All Times are in minutes

| Method | "Yield %" | Total Time | Earned | Lost Time | Delta Time |
| --- | --- | --- | --- | --- | --- |
| Yield Quantity % | 94.06% | 216,000 | 203,169.6 | 12,830.4 | 5,896.8 |
| Average Yield % | 95.42% | 216,000 | 206,107.2 | 9,892.8 | 2,959.2 |
| Yield Time % | 96.79% | 216,000 | 209,066.4 | 6,933.6 | 0.0 |

The “Yield Quantity %” method indicates the actual lost time that could be incurred annually is 12,830.4 minutes (28.51 shifts).  Relative to our “Yield Time %” method, this is overstated by 5,896.8 minutes, the equivalent of just over 13 shifts.  Similarly, the “Average Yield %” method indicates a total lost time of 9,892.8 minutes (21.98 shifts).  Relative to our “Yield Time %” method, this is overstated by 2,959.2 minutes or approximately 6.6 shifts.  This further exemplifies the need to understand the correct way to calculate the Quality Factor.
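The annualized comparison can be reproduced with a few lines of Python.  Note that the tables above round each Yield % to two decimal places before converting to time, so this sketch (our own) does the same:

```python
# Lost time implied by each method's Yield % over a year of scheduled time.
yields = {
    "Yield Quantity %": 0.9406,  # simple yield
    "Average Yield %":  0.9542,  # unweighted average of part yields
    "Yield Time %":     0.9679,  # time-weighted (the correct factor)
}
annual_minutes = 240 * 450 * 2   # 240 days x 450 min/shift x 2 shifts = 216,000

true_lost = annual_minutes * (1 - yields["Yield Time %"])   # 6,933.6 minutes
for method, y in yields.items():
    lost = annual_minutes * (1 - y)
    delta = lost - true_lost
    print(f"{method:17s} lost = {lost:9.1f} min, delta = {delta:7.1f} min")
```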

Let’s continue to re-affirm the validity of our calculation method.

Individually Weighted Quality Factors

We will now show you how to calculate the individually weighted quality factors for each part number or line item.  The weighted “time based” quality factor is calculated using the following formula for each line item part number: 

Weighted Line Item Yield = Value Added Time / Total Time for All Parts

where Value Added Time = Total Time – Lost Time
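Applying this formula to our example in Python (our own sketch) confirms that the weighted line items sum to the overall factor:

```python
# Part number: (standard rate in parts/minute, total produced, defective)
parts = {1: (2.0, 800, 10), 2: (8.0, 1600, 160), 3: (1.0, 800, 20)}

total_time_all = sum(qty / rate for rate, qty, _ in parts.values())  # 1400 minutes

weighted = {}
for pn, (rate, qty, bad) in parts.items():
    value_added = (qty - bad) / rate          # this part's value-added minutes
    weighted[pn] = value_added / total_time_all

for pn, w in weighted.items():
    print(f"part {pn}: {w:.2%}")              # 28.21%, 12.86%, 55.71%
print(f"sum:    {sum(weighted.values()):.2%}")  # 96.79%, matching Yield % Time
```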

 We have simplified the table from our example to show the time related factors only.  The table showing the time weighted quality factors from our example is as follows:

| Part Number | Rate / Minute | Total Produced | Total Time | Defective Quantity | Lost Time | Yield % (Time) | Weighted % Yield Time |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 2 | 800 | 400 | 10 | 5 | 98.75% | 28.21% |
| 2 | 8 | 1600 | 200 | 160 | 20 | 90.00% | 12.86% |
| 3 | 1 | 800 | 800 | 20 | 20 | 97.50% | 55.71% |
| Totals |  | 3200 | 1400 | 190 | 45 | 96.79% | 96.79% |
| Averages | 4 | 1067 | 467 | 63 | 15 | 95.42% |  |

As we can see from the table, the sum of the “Weighted % Yield Time” percentages is the same as the “Yield % Time”.  The time based formula is once again validated.  We will now take this table one step further to reveal where the real opportunities are to improve the Quality Factor and Overall Equipment Effectiveness.

Improving the Quality Factor

Neither the Yield % nor the Weighted % Yield Time provides a real indication of each part number's contribution to the overall weighted quality factor.  We can see from the table that part numbers 2 and 3 both resulted in 20 minutes of lost time, compared to part number 1 where only 5 minutes were lost.

Since part numbers 2 and 3 resulted in an equivalent loss of time, we would expect that they would also result in an equal contribution to improve the Quality Factor.  To demonstrate this and to appreciate the real improvement opportunity, we added two more columns to our table as shown below – “Weighted % Process Time” and “Yield % Opportunity”:

Machine A – Weighted Quality Factor – EXAMPLE  

| Part Number | Total Time | Weighted % Process Time | Lost Time | Value Added Time | Yield % (Time) | Weighted % Yield Time | Yield % Opportunity |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 400 | 28.57% | 5 | 395 | 98.75% | 28.21% | 0.36% |
| 2 | 200 | 14.29% | 20 | 180 | 90.00% | 12.86% | 1.43% |
| 3 | 800 | 57.14% | 20 | 780 | 97.50% | 55.71% | 1.43% |
| Totals | 1400 | 100.00% | 45 | 1355 | 96.79% | 96.79% | 3.21% |
| Averages | 467 | 33.33% | 15 | 452 | 95.42% | 32.26% | 1.07% |

The weighted process time was calculated by dividing the process time for each part number by the Total Time.  Once again, we can validate our weighted Quality Time by multiplying the “Weighted % Process Time” by the “Yield %” for each line item. 

To make sure we understand the calculations involved, let’s work out one of the line items in the table.  For Part Number 1, 

  • Weighted % Process Time = 400 / 1400 = 28.57%
  • (1)  Weighted % Yield Time = 28.57% * 98.75% = 28.21%
  • (2)  Weighted % Yield Time = (400 – 5) / 1400 = 28.21%

Note that we showed two ways to demonstrate the Weighted % Yield Time to once again validate the quality factor calculation method.

The opportunity to improve the OEE for the three part numbers is the difference between the Weighted Process Time and the Weighted Yield Time.  For Part Number 1,

            Improvement = 28.57% – 28.21% = 0.36%

Similarly, the improvements for part numbers 2 and 3 are as follows: 

  • Improvement Part Number 2 = 14.29% – 12.86% = 1.43%
  • Improvement Part Number 3 = 57.14% – 55.71% = 1.43%
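These opportunity figures follow directly from lost time over total time; a small Python sketch (our own) makes that explicit:

```python
# Part number: (total time at standard rate, lost time), both in minutes
parts = {1: (400, 5), 2: (200, 20), 3: (800, 20)}
grand_total = sum(t for t, _ in parts.values())   # 1400 minutes

for pn, (t, lost) in parts.items():
    weighted_process = t / grand_total            # Weighted % Process Time
    weighted_yield = (t - lost) / grand_total     # Weighted % Yield Time
    opportunity = weighted_process - weighted_yield   # equals lost / grand_total
    print(f"part {pn}: opportunity = {opportunity:.2%}")  # 0.36%, 1.43%, 1.43%
```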

Three Key Observations

  1. First, the results of the calculations are consistent with the actual observed lost time.
  2. Second, although the yields for part numbers 2 and 3 are significantly different, each has the same NET impact on the final OEE result.
  3. Third, when we add the total “Yield % Opportunity” (3.21%) for all three part numbers to the total “Weighted % Yield Time” (96.79%), the result is 100%.

This last calculation once again demonstrates that the Quality Factor calculation presented here is consistent with the true definition of OEE.

The formula for the Quality Factor is:

Total Time to Produce Good Parts @ Rate / Total Time to Produce ALL Parts @ Rate

One Final Proof

Our method will produce a result that is consistent with the formula OEE = A * P * Q.  Using our example, it is clear that if Availability and Performance are both 100% and the Quality Factor is 96.79%, the final OEE for all parts will also be 96.79%.

Consistent with the definition of OEE, using our example, 96.79% of 1400 minutes is 1355 minutes.  This is the time that was used to make good or acceptable quality parts.  Similarly then, the time lost making all defective parts is 45 minutes (1400 – 1355 = 45).

The Impact to Operations

OEE is typically used by the Operations team for capacity planning, labour planning, and to determine how much time to schedule for a given resource to produce parts.  The above examples clearly demonstrate that even a small delta can have significant capacity, labour, and scheduling implications.  From this perspective it also becomes a relatively simple task to determine the direct labour costs associated with the production of defective parts.

Purchasing, Materials, Scheduling (Lead Times), Inventory (Stock), Finance, and Quality are all affected by inaccurate data and, in this case, OEE calculation errors.  Of course these errors are not just limited to the Quality Factor itself.

There are other significant losses and costs related to quality as well.  It is not our intent to pursue a discussion on the cost of non-quality as we recognize there are many other factors (internal and external) that must be considered to truly understand the real cost of non-quality for activities such as sorting, inspection, scrap (material losses), rework, re-order, machine time, and administration.

In the real world, someone may just be preparing a plan to improve the Quality of parts running on Machine A to reduce excessive labour and material costs.  We can only wonder what method they used to calculate the “savings”.  Inevitably, many companies approve the project and the funding only to realize the savings fell well short of expectations or will never materialize at all.

In Closing

We would contend that the differences in the calculation method presented here and those found elsewhere are significant.  In our example case, the difference is 2.72%.  We demonstrated that this can be significant when annualized over time.  Similarly, the opportunity for improvements using our method is clear and concise.

Now when someone asks you how to calculate the Quality Factor, you can confidently show them how and tell them why.

The example used in this post can also be downloaded from our BOX File on the sidebar or CLICK HERE.  This is offered at no charge and of course will make it easier for you to use for your own applications.

Thank you for visiting – Until Next Time – STAY lean!

Feel free to send us your feedback – We appreciate your questions, comments, and suggestions.

Privacy Policy:  We do not share, distribute, or sell your contact information.  What you send to us – stays with us.

OEE and the Quality Factor

Many articles written on OEE (ours being the exception) indicate or suggest that the quality factor for OEE is calculated as a simple percentage of good parts out of the total of all parts produced.  While this calculation may work for a single part number, it certainly doesn’t hold true when attempting to calculate OEE for multiple parts or machines.

OEE is a measure of how effectively the scheduled equipment time is used to produce a quality product.  Over the next few days we will introduce a method that correctly calculates the quality factor and satisfies the true definition of OEE.  The examples we have prepared are developed in detail so you will be able to perform the calculations correctly and with confidence.

Every time a part is produced, machine time is consumed.  This time is the same for both good and defective parts.  To correctly calculate the quality factor requires us to start thinking of parts in terms of time – not quantity.

If the cycle time to produce a part is 60 seconds, then one defective part results in a loss of 60 seconds.  If 10 out of 100 parts produced are defective then 600 seconds are lost of the total 6000 seconds required to produce all parts.  Stated in terms of the quality factor, 5400 seconds were “earned” to make quality parts of the total 6000 seconds required to produce all parts (5400/6000 = 90%).  Earned time is also referred to as Value Added Time.
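In code form, the 60-second example works out as follows (a trivial sketch of our own):

```python
cycle_time = 60              # seconds per part (standard cycle time)
produced, defective = 100, 10

total_time = produced * cycle_time        # 6000 s to produce all parts
lost_time = defective * cycle_time        # 600 s lost making defects
earned_time = total_time - lost_time      # 5400 s of value-added time

print(f"{earned_time / total_time:.0%}")  # 90%
```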

As we stated earlier, for a single line item or product, the simple yield formula gives the same result from a percentage perspective (90 good / 100 total = 90%).  But what is the effect when the cycle times of a group or family of parts vary?  The simple yield formula no longer works.

The quality factor for OEE is only concerned with the time earned through the production of quality parts.  Watch for our post over the next few days and we’ll clear up the seemingly overlooked “how to” of calculating the quality factor.

Until Next Time – STAY lean!

We appreciate your feedback.  Please feel free to leave a comment or send an e-mail with suggestions or questions to leanexecution@gmail.com

We respect your privacy – What you share with us, stays with us.

How to use the 5 WHY approach

Image: schema of the process of problem solving (via Wikipedia)

The 5 WHY technique, developed by Sakichi Toyoda, is one of the core problem solving tools used by Toyota Motor Corporation and has been adopted and embraced by numerous companies all over the globe.  This technique is unconstrained, providing the team with a high degree of freedom in their thinking process.

As we suggested in our “How to Improve OEE” post, the 5 WHY system is simple in principle.  This simplicity may also be the downfall of this technique unless you take the time to understand and apply the process correctly.  Other problem solving tools, such as Cause and Effect diagrams, allow for the development of multiple solution threads, in turn creating the potential for multiple solutions.

Some root-cause analysis experts have correctly identified shortcomings of the 5-WHY technique, including:

  1. The approach is not repeatable – One problem, different teams, different solutions.
  2. The scope of the investigation is constrained by the experience of the team.
  3. The process is self directing based on the evolution of the “WHY + Answer” series.
  4. The TRUE Root Cause may never be identified – Symptoms may be confused for Root Causes.
  5. The inference that a root cause can be determined by a 5 tier “Why + Answer” series.
  6. The Problem Statement defines the Point of Entry. It is imperative to define where the real problem begins.

We would argue that any problem solving or root-cause analysis tool is subject to shortfalls in one form or another.  Perhaps even in problem solving there is no definitive solution: different problems require different tools and, at times, different approaches.  In the automotive industry, for example, each customer prescribes its own variation on the problem solving approach and the tools to be used in the process.

For this reason, most companies do not rely on a single technique for their problem solving challenges.  We would also argue that most companies are typically well versed in their own processes (equipment and machines), products, and applications.  As a result, having the right people on the team will minimize the experience concerns.  There is also no reason not to include outside expertise, whether from within or beyond your current industry.

One concern that can be dismissed from the above list of short comings is the inference that the solution can be found by a 5 tier “Why + Answer” series.  There is no rule as to how many times the “Why + Answer” series should be executed.  Although five times is typical and recommended, some problems may require even deeper levels.  We recommend that you keep going until you have identified a root cause for the problem that when acted upon will prevent its recurrence.

The TRICK:

The technique that we propose in this post will at least provide a method to validate the logic used to arrive at the root cause.  Most 5-WHY posts, web sites, articles, or extracts on the topic seem to focus on a top-down or deductive “Why + Answer” logic sequence.  The challenge then is to have some way to check the “answer” to see if it actually fits.

A simple way to validate the top down logic is to read the analysis in reverse order, from the bottom up, substituting the question WHY with the words “Because” or “Therefore.”  To demonstrate the technique we’ll use an example based on a problem sequence presented in Wikipedia:

I am late for work (the problem):

  1. Why? – My car will not start. (The Real Problem)
  2. Why? – The battery is dead.
  3. Why? – The alternator is not working.
  4. Why? – The alternator belt is broken.
  5. Why? – The alternator belt was well beyond its useful service life and was never replaced.
  6. Why? – The car was not maintained according to the recommended service schedule. (Root Cause)

You probably noticed that we used a 6 “Why + Answer” series instead of 5.  We did this deliberately to demonstrate that 5 WHY is a guideline and not a rule.  Keep asking WHY until you find a definitive root cause to the problem.  We could keep going to determine why the car was not maintained and so on to eventually uncover some childhood fear of commitment but that is beyond the scope of our example.

The CROSS CHECK – Root Cause Analysis Validation

Root Cause: The car was not maintained according to the recommended service schedule.

  1. Therefore, the alternator belt was well beyond its useful service life and was never replaced.
  2. Therefore, the alternator belt is broken.
  3. Therefore, the alternator is not working.
  4. Therefore, the battery is dead.
  5. Therefore, the car will not start. (The Real Problem)
  6. Therefore, I will be late for work.

Does the reverse logic make sense to you?  It seems to fit.  Does it sound like the owner of the car needed to be a mechanic or at least know one?  When it comes to car trouble, we don’t seem too concerned about going to the outside experts (the mechanic) to get it fixed.  Why do some companies fail to recognize that experts also exist outside of their business as well?  In some cases, proprietary or intellectual knowledge would preclude calling in outside resources.  Barring that, some outside expertise can certainly bring a different perspective to the problem at hand.
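The reverse-reading cross-check can even be mechanized.  This small Python sketch (our own illustration) stores the WHY chain and replays it bottom-up with “Therefore”:

```python
# Each entry is the answer to a successive "Why?", ending at the root cause.
why_chain = [
    "I am late for work",
    "my car will not start",
    "the battery is dead",
    "the alternator is not working",
    "the alternator belt is broken",
    "the alternator belt was well beyond its useful service life and was never replaced",
    "the car was not maintained according to the recommended service schedule",
]

print(f"Root cause: {why_chain[-1]}.")
# Walk the chain in reverse, prefixing each effect with "Therefore"
for effect in reversed(why_chain[:-1]):
    print(f"Therefore, {effect}.")
```

If any "Therefore" statement reads as a non sequitur, the corresponding link in the top-down analysis deserves another look.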

Caution!  Stick to the Problem – Don’t Assign Blame

The original Wikipedia example identified the root cause as, “I have not been maintaining my car according to the recommended service schedule.”  It would be too easy for someone to say, “Aha, it’s entirely your fault.  If you only took better care of your things you wouldn’t have been in this predicament.”  For this reason, we presented the case based on the facts.  It’s not WHO it’s WHAT.  This approach also tempers emotions and keeps the team focused on the problem and the solution.

Where do we start? Problem Entry Points

You have likely noted that the problem statement is the key to establishing a starting point for the 5 WHY process.  A problem may have different entry points depending on what stage you become involved:

| Entry Point | Problem Statement |
| --- | --- |
| You | Late for work |
| Service Manager | The car will not start |
| Mechanic | The battery is dead |
| Belt Supplier | The alternator belt is broken |

Product Recalls and Warranty Returns are typical examples of where you may find multi-level 5 WHYs.  Ultimately the suppliers of most products, like the Belt Supplier in our example, will also complete a 5 WHY.  This is typically the case for most Tier I automotive suppliers.

WHY MAPS / TREES.

One drawback of the 5 WHY process as presented above, and as used by most companies, is the suggestion that a single “WHY + Answer” series will evolve into one neat root cause.  Our experience suggests this is far from reality.  We typically present a single series as part of the final solution; however, we can assure you that multiple root cause / solution threads were developed before arriving at that result.

We use the WHY MAP (WHY TREE) as a tool that allows us to pursue multiple thought threads simultaneously.  Pursuing multiple threads also stimulates new ideas and potential causes.  In some cases the root cause analysis threads lead to the same or common root cause.  Then it is a matter of selecting the most likely root cause.

Tip:

Problem solving TREES come under many different names, including Why-Tree, Cause Tree, Root Cause Tree, Causal Factor Tree, Why Staircase Tree, and Cause Map, to name a few.  As the names suggest, they all serve to create, stimulate, and propagate ideas.

Regardless of the tool you use, finding the true root cause and ultimately the solution to resolve it is the key to your problem solving success.

We trust this post will provide you with some insight to using the 5-WHY approach for problem solving and will serve as a useful tool to improve your OEE.

More on this series to follow in our next post.

Until Next Time – STAY lean!