Tag: Overall Equipment Effectiveness

OEE Measurement Error

How many times have you, or someone you know, challenged the measurement process or method used to collect the data because the numbers just “don’t make sense” or “can’t be right”?

It is imperative to have integrity in the data collection process to minimize the effect of phantom improvements through measurement method changes.  Switching from a manual recording system to a completely automated system is a simple example of a data collection method change that will most certainly generate “different” results.

Every measurement system is subject to error including those used to measure and monitor OEE.  We briefly discussed the concept of variance with respect to actual process throughput and, as you may expect from this post, variance also applies to the measurement system.

Process and measurement stability are intertwined.  A reliable data collection / measurement system is required to establish an effective baseline for your OEE improvement efforts.  We have observed very unstable processes with extreme swings in throughput from one shift to the next.  In many cases, we learned that the variance was not in the process but in the measurement system itself.

We decided to comment briefly on this phenomenon of measurement error for several reasons:

  1. The reporting systems will naturally improve as more attention is given to the data they generate.
  2. Manual data collection and reporting systems are prone to errors in both recording and data input.
  3. Automated data collection systems substantially reduce the risk of errors and improve data accuracy.
  4. Changes in OEE trends may be attributed to data collection technology not real process changes.

Consider the following:

  1. A person records the time of the down time and reset / start up events by reading a clock on the wall.
  2. A person records the time of the down time event using a wrist watch and then records the reset / start up time using the clock on the wall.
  3. A person uses a stop watch to track the duration of a down time event.
  4. Down time and up time event data are collected and retrieved from a fully automated system that instantly records events in real time.

Clearly, each of the above data collection methods will present varying degrees of “error” that will influence the accuracy of the resulting OEE.  The potential measurement error should be a consideration when attempting to quantify improvement efforts.

Measurement and Error Resolution

The technology used will certainly drive the degree of error you may expect to see.  A clock on the wall may yield an error of +/- 1 minute per event versus an automated system that may yield an error of +/- 0.01 seconds.

The resolution of the measurement system becomes even more relevant when we consider the duration of the “event”.  Consider the effect of measurement resolution and potential error for a down time event having a duration of 5 minutes versus 60 minutes.
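
The effect of resolution can be made concrete with a minimal sketch (the ±1-minute clock and the 5- and 60-minute durations come from the discussion above; the function name is ours).  Reading a wall clock at both the start and the end of an event means each reading can be off by up to a minute, so the worst-case error on the duration is ±2 minutes:

```python
def relative_error_pct(duration_min: float, resolution_min: float) -> float:
    """Worst-case measurement error as a percentage of the event duration."""
    return 100.0 * resolution_min / duration_min

# Wall clock read to the nearest minute: the start and stop readings
# can each be off by up to 1 minute, for a combined worst case of 2.
for duration in (5, 60):
    print(f"{duration:>3} min event: +/-{relative_error_pct(duration, 2):.1f}% worst case")
```

A 2-minute error on a 5-minute event is a 40% distortion of the recorded duration; on a 60-minute event it is barely 3%.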

CAUTION!

A classic fallacy is “inferred accuracy” as demonstrated by the stop watch measurement method.  Times may be recorded to 1/100th of a second, suggesting a high degree of precision in the measurement.  Meanwhile, it may take the operator 10 seconds to locate the stop watch, 15 seconds to reset a machine fault, 20 seconds to document the event on a “report”, and another 10 seconds to return the stop watch to its proper location.

What are we missing?  How significant is the event and was it worth even recording?  What if one operator records the “duration” after the machine is reset while another operator records the “duration” after documenting and returning the watch to its proper location?

The above example demands that we also consider the event type:  “high frequency-short duration” versus “low frequency-long duration” events.  Both must be considered when attempting to understand the results.

The EVENT is the Opportunity

As mentioned in previous posts, we need to understand what we are measuring and why.  The “event” and methods to avoid recurrence must be the focus of the improvement effort.  The cumulative duration of an event will help to focus efforts and prioritize the opportunities for improvement.

Additional metrics to help “understand” various process events include Mean Response Time, Mean Time Between Failures (MTBF), and Mean Time To Repair (MTTR).  Even 911 calls are monitored from the time the call is received.  The response time is as critical as, if not more critical than, the actual event, especially when the condition is life-threatening or otherwise self-destructive (fire, meltdown).

An interesting metric is the ratio between Response Time and Mean Time To Repair.  The response time is measured from the time the event occurs to the time “help” arrives.  Our experience suggests that significant improvements can be made simply by reducing the response time.
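
These metrics can be sketched from a simple event log; the record layout and the numbers below are hypothetical, purely for illustration:

```python
# Hypothetical event log: (fault occurs, help arrives, machine restarts),
# all in minutes from the start of the shift.
events = [
    (10, 25, 26),    # 15 min waiting for help, 1 min actual repair
    (120, 138, 140),
    (300, 311, 312),
]

response_times = [arrive - occur for occur, arrive, _ in events]
repair_times = [restart - arrive for _, arrive, restart in events]

mean_response = sum(response_times) / len(response_times)
mttr = sum(repair_times) / len(repair_times)

# Time between failures: the gap from one restart to the next fault.
gaps = [events[i + 1][0] - events[i][2] for i in range(len(events) - 1)]
mtbf = sum(gaps) / len(gaps)

print(f"Mean response time: {mean_response:.1f} min")
print(f"MTTR: {mttr:.1f} min")
print(f"MTBF: {mtbf:.1f} min")
print(f"Response / MTTR ratio: {mean_response / mttr:.1f}")
```

In this made-up log the machine is down roughly eleven times longer waiting for help than being repaired, which is exactly the opportunity described above.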

We recommend training and providing employees with the skills needed to be able to respond to “events” in real time.  Waiting 15 minutes for a technician to arrive to reset a machine fault that required only 10 seconds to resolve is clearly an opportunity.

Many facilities actually hire “semi-skilled” labour or “skilled technicians” to operate machines.  They are typically flexible, adaptable, present a strong aptitude for continual improvement, and readily trained to resolve process events in real time.

Conclusion

Measurement systems of any kind are prone to error.  While it is important to understand the significance of measurement error, it should not be the “primary” focus.  We recommend PREVENTION and ELIMINATION of events that impede the ability to produce a quality product at rate.

Regrettably, some companies are more interested in collecting “accurate” data than in making real improvements (measuring for measurement’s sake).

WHAT are you measuring and WHY?  Do you measure what you can’t control?  We will leave you with these few points to ponder.

Until next time – STAY Lean!

Problem Solving with OEE – Measuring Success

OEE in Perspective

As mentioned in our previous posts, OEE is a terrific metric for measuring and monitoring ongoing performance in your operation.  However, like many metrics, it can become the focus rather than the gauge of performance it is intended to be.

The objective of measuring OEE is to identify opportunities where improvements can be made or to determine whether the changes to your process provided the results you were seeking to achieve.  Lean organizations predict performance expectations and document the reasons to support the anticipated results.  The measurement system used to monitor performance serves as a gauge to determine whether the reasons for the actual outcomes were valid.  A “miss” to target indicates that something is wrong with the reasoning – whether the result is positive or negative.

Lean organizations are learning continually and recognize the need to understand why and how processes work.  Predicting results with supported documentation verifies the level of understanding of the process itself.  Failing to predict the result is an indicator that the process is not yet fully understood.

Problem Solving with OEE

Improvement strategies that are driven by OEE should cause the focus to shift to specific elements or areas in your operation such as reduction in tool change-over or setup time, improved material handling strategies, or quality improvement initiatives.  Focusing on the basic tenets of Lean will ultimately lead to improvements in OEE.  See the process in operation (first-hand), identify opportunities for improvement, immediately resolve issues, implement and document corrective actions, then share the knowledge with the team and the company.

Understanding and Managing Variance:

OEE data is subject to variation like any other process in your operation.  What are the sources of variation?  If there is a constant effort to improve performance, then you would expect to see positive performance trends.  However, monitoring OEE and attempting to maintain positive performance trends can be a real challenge if the variances are left unchecked.

Availability

What if change-over times or setup times have been dramatically reduced?  Rather than running a job once a week, it has now been decided to run it daily (five times per week).  What if the total downtime is the same and the same number of parts are made over the same period of time?  Did we make an improvement?

The availability factor may very well be the same.  We would suggest that, yes, a significant improvement was made.  While the OEE may remain the same, the inventory turns may increase substantially and the inventory on hand could certainly be converted into sales much more readily.  So, the improvement will ultimately be measured by a different metric.
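
One way to see the inventory effect is the classic saw-tooth approximation, in which average on-hand stock for a produce-then-deplete cycle is roughly half the batch size.  A small sketch, assuming an illustrative weekly demand of 1,000 units (both the demand figure and the half-batch model are our assumptions, not from the example above):

```python
WEEKLY_DEMAND = 1000  # units (illustrative)

def avg_inventory(batch_size: float) -> float:
    """Average on-hand stock for a produce-then-deplete (saw-tooth) cycle."""
    return batch_size / 2

weekly_runs = avg_inventory(WEEKLY_DEMAND)       # one run per week
daily_runs = avg_inventory(WEEKLY_DEMAND / 5)    # five runs per week

print(f"Avg inventory, weekly runs: {weekly_runs:.0f} units")
print(f"Avg inventory, daily runs:  {daily_runs:.0f} units")
```

Same parts, same downtime, same availability – yet average inventory drops to one fifth, which is exactly why the improvement shows up in inventory turns rather than in OEE.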

Performance

Cycle time reductions are typically used to demonstrate improvements in the reported OEE.  In some cases, methods have been changed to improve the throughput of the process; in other cases, the process was never optimized from the start.  In still other instances, parts are run on a different and faster machine, resulting in higher rates of production.  The latter case does not necessarily mean the OEE has improved, since the baseline used to measure it has changed.
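
The baseline effect can be sketched with the standard performance calculation (performance = ideal cycle time × total count / actual run time); the cycle times and counts below are illustrative:

```python
def performance(ideal_cycle_s: float, parts: int, run_time_s: float) -> float:
    """Performance factor = ideal cycle time x total count / actual run time."""
    return ideal_cycle_s * parts / run_time_s

# Original machine: 400 parts in one hour against a 9-second ideal cycle.
print(f"{performance(9, 400, 3600):.0%}")   # 100%

# Same 400 parts moved to a faster machine with a 6-second ideal cycle,
# finished in 40 minutes: output per hour is up, but so is the baseline,
# so the performance factor is unchanged.
print(f"{performance(6, 400, 2400):.0%}")   # 100%
```

The process produces faster, yet the reported performance factor is identical – the improvement is real, but OEE alone cannot show it because the ideal cycle time moved with the machine.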

Another example pertains to manual operations ultimately controlled through human effort.  The standard cycle time for calculating OEE is based on one operator running the machine.  In an effort to improve productivity, a second operator is added.  The performance factor of the operation may improve; however, the conditions have changed.  The perceived OEE improvement may not be an improvement at all.  Another metric such as Labour Variance or Efficiency may actually show a decline.

Quality

Another perceived improvement pertains to Quality.  Hopefully there aren’t too many examples like this one – changing the acceptance criteria to allow more parts to pass as acceptable, fit for function, or saleable product (although it is possible that the original standards were too high).

Standards

Changing standards is not the same as changing the process.  Consider another more obvious example pertaining to availability.  Assume the change over time for a process is 30 minutes and the total planned production time is 1 hour (including change over time).  For simplicity of the calculation, no other downtime is assumed.  The availability in this case is 50% ((60 – 30) / 60).

To “improve” the availability we could have run for another hour and the resulting availability would be 75% ((120 – 30) / 120).  The availability will show an improvement but the change-over process itself has not changed.  This is clearly an example of time management, perhaps even inventory control, not process change.

This last example also demonstrates why comparing shifts may be compromised when using OEE as a stand-alone metric.  What if one shift completed the setup in 20 minutes and could only run for 30 minutes before the shift was over (Availability = 60%)?  The next shift comes in and runs for 8 hours without incident or down time (Availability = 100%).  Which shift really did the better job, all other factors being equal?
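
The worked availability figures above can all be reproduced with a one-line calculation (the function name is ours):

```python
def availability(planned_min: float, downtime_min: float) -> float:
    """Availability = run time / planned production time."""
    return (planned_min - downtime_min) / planned_min

# One-hour window with a 30-minute change-over.
print(f"{availability(60, 30):.0%}")    # 50%

# Run a second hour against the same single change-over.
print(f"{availability(120, 30):.0%}")   # 75%

# Shift A: 20-minute setup, only 30 minutes of run time left in the shift.
print(f"{availability(50, 20):.0%}")    # 60%

# Shift B: 8 hours with no downtime at all.
print(f"{availability(480, 0):.0%}")    # 100%
```

Shift A did the harder, faster setup yet reports the lower number – the arithmetic is correct, but on its own it answers the wrong question.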

Caution

When working with OEE, be careful how the results are used and certainly consider how the results could be compromised if the culture has not adopted the real meaning of Lean Thinking.  The metric is there to help you improve your operation – not figure out ways to beat the system!

FREE Downloads

We are currently offering our Excel OEE Spreadsheet Templates and example files at no charge.  You can download our files from the ORANGE BOX on the sidebar titled “FREE DOWNLOADS” or click on the FREE Downloads Page.  These files can be used as is and can be easily modified to suit many different manufacturing processes.  There are no hidden files, formulas, or macros and no obligations for the services provided here.

Please forward your questions, comments, or suggestions to LeanExecution@gmail.com.  To request our services for a specific project, please send your inquiries to Vergence.Consulting@gmail.com.

We welcome your feedback and thank you for visiting.

Until Next Time – STAY Lean!

"Click"