Category: Process Control and OEE

Are You Suffering from Fragmentation?

Task life cycle (image via Wikipedia).

When Toyota arrived on the North American manufacturing scene, automakers were introduced to many of Toyota’s best practices including the Toyota Production System (TPS) and the well-known “Toyota Way”.  Since that time, Toyota’s best practices have been introduced to numerous other industries and service providers with varying degrees of success.

In simple terms, Toyota’s elusive goal of single-piece flow demands that parts be processed one piece at a time, and only as required by the customer.  The practice of batch processing was successfully challenged and shown to be inefficient: it inherently fragments processes and drives higher inventories, longer lead times, and higher costs.

At the other extreme, over-specialization can lead to excessive process fragmentation, evidenced by decreased efficiency, higher labour costs, and increased lead times.  In other words, we must concern ourselves with optimizing process tasks to the point where maximum flow is achieved in the shortest amount of time.

An example of excessive specialization can be found in the healthcare system here in Ontario, Canada.  Patients visit their family doctor only to be sent to a specialist who, in turn, prescribes a series of tests to be completed by yet another layer of “specialists”.  To complicate matters even more, each of these specialized services is inconveniently separated geographically as well.

Excessive fragmentation can be found by conducting a thorough review of the entire process.  The review must consider the time required to perform “real value added” tasks versus non-value added tasks as well as the time-lapse that may be incurred between tasks.  Although individual “steps” may be performed efficiently and within seconds, minutes, or hours, having to wait several days, weeks, or even months between tasks clearly undermines the efficiency of the process as a whole.

In the case of healthcare, the time lapse between visits or “tasks” is borne by the patient and since the facilities are managed independently, wait times are inherently extended.  Manufacturers suffer a similar fate where outside services are concerned.  Localization of services is certainly worthy of consideration when attempting to reduce lead times and ultimately cost.

Computers use de-fragmentation software to “relocate” data in a manner that facilitates improved file storage and retrieval.  If only we could “defrag” our processes in a similar way to improve our manufacturing and service industries.  “Made In China” labels continue to appear on far too many items that could be manufactured here at home!

Until Next Time – Stay LEAN!

Vergence Analytics


OEE in an imperfect world

A selection of normal distribution probability curves (image via Wikipedia).

Background: This is a more general presentation of “Variation:  OEE’s Silent Partner” published on January 31, 2011.

In a perfect world we can produce quality parts at rate, on time, every time.  In reality, however, all aspects of our processes are subject to variation that affects each factor of Overall Equipment Effectiveness:  Availability, Performance, and Quality.

Our ability to effectively implement Preventive Maintenance programs and Quality Management Systems is reflected in our ability to control and improve our processes, eliminate or reduce variation, and increase throughput.

The Variance Factor

Every process and measurement is subject to variation and error.  It is only reasonable to expect that metrics such as Overall Equipment Effectiveness and Labour Efficiency will also exhibit variance.  The normal distributions for four (4) different data sets are represented by the graphic that accompanies this post.  You will note that although the average for three of the curves (blue, red, and yellow) is common (μ = 0), the shapes of the curves are radically different.  The green curve shows a normal distribution that is shifted to the left, with an average (μ) of −2, although its standard deviation is smaller than that of the yellow and red curves.

The graphic also allows us to see the relationship between the standard deviation and the shape of the curve.  As the standard deviation increases, the height of the curve decreases and its width increases.  From these simple representations, we can see that our objective is to reduce the standard deviation.  The only way to do this is to reduce or eliminate variation in our processes.

We can use a variety of statistical measurements to help us determine or describe the amount of variation we may expect to see.  Although we are not expected to become experts in statistics, most of us should already be familiar with the normal distribution or “bell curve” and terms such as Average, Range, Standard Deviation, Variance, Skewness, and Kurtosis.  In the absence of an actual graphic, these terms help us to picture what the distribution of data may look like in our mind’s eye.
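Most of these measures are a single formula away in Excel or VBA.  As a minimal illustration (not from the original post), the sketch below summarizes a set of hourly good-part counts using the two most commonly quoted measures; the “HourlyCounts” named range is a hypothetical placeholder for wherever your run data actually lives.

```vba
' Minimal sketch: describe the spread of hourly good-part counts.
' "HourlyCounts" is a hypothetical named range with one count per hour.
Sub SummarizeRunVariation()
    Dim data As Range
    Dim mean As Double, stdDev As Double

    Set data = ThisWorkbook.Names("HourlyCounts").RefersToRange

    mean = Application.WorksheetFunction.Average(data)  ' central tendency
    stdDev = Application.WorksheetFunction.StDev(data)  ' sample standard deviation

    MsgBox "Average parts per hour = " & Format(mean, "0.0") & vbCrLf & _
           "Standard deviation = " & Format(stdDev, "0.0")
End Sub
```

Two runs can share the same average and still behave very differently; the standard deviation is what separates the stable process from the erratic one.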

Run Time Data

The simplest common denominator and most readily available measurement for production is the quantity of good parts produced.  Many companies have real-time displays that show the quantity produced and, in some cases, go so far as to display Overall Equipment Effectiveness (OEE) and its factors – Availability, Performance, and Quality.  While the expense of live streaming data displays can be difficult to justify, there is no reason to abandon the intent that such systems bring to the shop floor.  Equivalent means of reporting can be achieved using “whiteboards” or other forms of data collection.

I am concerned with any system that is based solely on cumulative shift or run data and does not include run time history.  An often overlooked opportunity for improvement lies in the lack of stability of productivity or throughput over the course of the run.  Systems with run time data allow us to identify production patterns and significant swings in throughput, and to correlate this data with downtime history.  This production story board allows us to analyze sources of instability, identify root causes, and implement timely and effective corrective actions.  For processes where throughput is highly unstable, I recommend a direct hands-on review on the shop floor in lieu of post-production data analysis.

Overall Equipment Effectiveness

Overall Equipment Effectiveness and the factors Availability, Performance, and Quality do not adequately or fully describe the capability of the production process.  Reporting on the change in standard deviation as well as OEE provides a more meaningful understanding of the process  and its inherent capability.

Improved capability also improves our ability to predict process throughput.  Your materials / production control team will certainly appreciate any improvements to stabilize process throughput as we strive to be more responsive to customer demand and reduce inventories.

Until Next Time – STAY lean!

Vergence Analytics

Twitter:  @Versalytics

OEE: The Means to an End – Differentiation Where It Matters Most

A pit stop at the Autodromo Nazionale di Monza (image via Wikipedia).

Does your organization focus on results or the means to achieve them?  Do you know when you’re having a good day?  Are your processes improving?

The reality is that too many opportunities are missed by simply focusing on results alone.  As we have discussed in many of our posts on problem solving and continuous improvement, the actions you take now will determine the results you achieve today and in the future. Focus on the means of making the product and the results are sure to follow.

Does it not make sense to measure the progress of actions and events in real-time that will affect the end result? Would it not make more sense to monitor our processes similar to the way we use Statistical Process Control techniques to measure current quality levels?  Is it possible to establish certain “conditions” that are indicative of success or failure at prescribed intervals as opposed to waiting for the run to finish?

By way of analogy, consider a team competing in a championship race.  While the objective is to win the race, we can be certain that each lap is timed to the fraction of a second and each pit stop is scrutinized for opportunities to reduce time off the track.  We can also be sure that fine tuning of the process and other small corrections are being made as the race progresses.  If performed correctly and faster than the competition, the actions taken will ultimately lead to victory.

Similarly, does it not make sense to monitor OEE in real-time?  If it is not possible or feasible to monitor OEE itself, is it possible to measure its components – Availability, Performance, and Quality – in real-time?  I would suggest that we can.

Performance metrics may include production and quality targets based on elapsed production time.  If the targets are hit at the prescribed intervals, then the desired OEE should also be realized.  If certain targets are missed, an escalation process can be initiated to involve the appropriate levels of support to immediately and effectively resolve the concerns.
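As a rough illustration of the idea (all names, signatures, and the escalation step are hypothetical, not from the original post), a routine like the following could compare cumulative good parts against a pro-rated target at each reporting interval:

```vba
' Hypothetical sketch of an interval target check, assuming a planned
' rate expressed in parts per hour.
Sub CheckIntervalTarget(ByVal goodParts As Long, _
                        ByVal elapsedMinutes As Double, _
                        ByVal plannedPartsPerHour As Double)
    Dim target As Double
    target = plannedPartsPerHour * (elapsedMinutes / 60#)   ' pro-rated target

    If goodParts < target Then
        ' Escalation placeholder: in practice this might page a supervisor
        ' or log the shortfall with a reason code.
        MsgBox "Behind plan: " & goodParts & " good parts versus a target of " & _
               Format(target, "0") & " after " & elapsedMinutes & " minutes."
    End If
End Sub
```

Checking against a pro-rated target rather than the end-of-shift total is what makes escalation possible while there is still time to recover.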

A higher reporting frequency or shorter time interval provides the opportunity to make smaller (minor) corrections in real-time and to capture relevant information for events that negatively affect OEE.

Improving OEE in real-time requires a skilled team that is capable of troubleshooting and solving problems in real-time.  Resolving concerns and implementing effective corrective actions in real-time is as important to improving OEE as the data collection process itself.

A lot of time, energy, and resources are expended to collect and analyze data.  Unfortunately, once the result is finalized, the opportunity to change it is lost to history.  The absence of event-driven data collection, coupled with after-the-fact analysis, leads to greater speculation about the events that “may have” occurred versus those that actually did.

Clearly, an end-of-run post-mortem is more meaningful when the supporting data represents events as they were recorded in real-time, when they actually occurred.  This data affords a greater opportunity to dissect the events themselves and delve into a deeper analysis that may yield opportunities for long-term improvements.

Set yourself apart from the competition.  Focus on the process while it is running and make improvements in real-time.  The results will speak for themselves.

Your feedback matters

If you have any comments, questions, or topics you would like us to address, please feel free to leave your comment in the space below or email us at feedback@leanexecution.ca or feedback@versalytics.com.  We look forward to hearing from you and thank you for visiting.

Until Next Time – STAY lean

Vergence Analytics
 

How Effective is Your Problem Solving?


Background

Of the many metrics that we use to manage our businesses, one area that is seldom measured is the effectiveness of the problem solving process itself.  We often engage a variety of problem solving tools such as 5-Why, Fishbone Diagrams, Fault Trees, Design of Experiments (DOE), or other forms of Statistical Analysis in our attempts to find an effective solution and implement permanent corrective actions.

Unfortunately, it is not uncommon for problems to persist even after the “fix” has been implemented.  Clearly, if the problem is recurring, either the problem was not adequately defined, the true root cause was not identified and verified correctly, or the corrective action (fix) required to address the root cause was inadequate.  While this seems simple enough, most lean practitioners recognize that solving problems is easier said than done.

Customers demand and expect defect free products and services from their suppliers.  To put it in simple terms, the mission for manufacturing is to:  “Safely produce a quality part at rate, delivered on time, in full.”  Our ability to attain the level of performance demanded by our mission and our customers is dependent on our ability to efficiently and effectively solve problems.

Metrics commonly used to measure supplier performance include Quality Defective Parts Per Million (PPM), Incident Rates, and Delivery Performance.  Persistent negative performance trends and repeat occurrences are indicative of an ineffective problem solving strategy.  Our ability to identify and solve problems efficiently and effectively increases customer confidence and minimizes product and business risks.

Predictability

One of the objectives of your problem solving activities should be to predict or quantify the expected level of improvement.  The premise of predictability introduces a measure of accountability to the problem solving process that may otherwise be non-existent.  In order to predict the outcome, the team must learn and understand the implications of the specific improvements they are proposing and, to the same extent, what the present process state is lacking.

To effectively solve a problem requires a thorough understanding of the elements that comprise the ideal state required to generate the desired outcome.  From this perspective, effective problem solving rests on our ability to discern or identify those items that do not meet the ideal-state condition and address them as items for improvement.  If each of these elements could also be quantified in terms of its contribution to the ideal state, then a further refinement in predictability can be achieved.

The ability to predict an outcome is predicated on the existence of a certain level of “wisdom”, knowledge, or understanding whereby a conclusion can be formulated.

Plan versus Actual

Measuring the effectiveness of the problem solving process can be achieved by comparing Planned versus Actual results. The ability to predict or plan for a specific result suggests an implicit level of prior knowledge exists to support or substantiate the outcome.

Fundamentally, the benefits of this methodology are three-fold as it measures:

  • How well we understand the process itself,
  • Our ability to adequately define the problem and effectively identify the true root cause, and
  • The effectiveness of the solution.

Another benefit of this methodology is the level of inherent accountability.  Specific performance measurements demand a greater degree of integrity in the problem solving process, and accountability becomes largely self-imposed among participants.

The ability for a person or team to accurately define, solve, and implement an effective solution with a high degree of success also serves as a measure of the individual’s or team’s level of understanding of that process.  From another perspective, it may serve as a measure of knowledge and learning yet to be acquired.

As you may expect, this strategy is not limited to solving quality problems and can be applied to any system or process.  This type of measurement system is used by most manufacturing facilities to measure planned versus actual parts produced and is directly correlated to overall equipment effectiveness or OEE.

Any company working in the automotive manufacturing sector recognizes that this methodology is an integral part of Toyota’s operating philosophy and for good reason.  As a learning organization, Toyota fully embraces opportunities to learn from variances to plan.

Performance expectations are methodically evaluated and calculated before engaging the resources of the company.  It is important to note that exceeding expectations is as much a cause for concern as falling short.  Missing the planned target in either direction (over or under) indicates that a knowledge gap still exists.  The objective is to revisit the assumptions of the planning model and to learn where adjustments are required to generate a predictable outcome.

Steven Spear discusses these key attributes that differentiate industry leaders from the rest of the pack in his book titled The High Velocity Edge.

First Time Through Quality (FTQ)

FTQ can also be applied to problem solving efforts by measuring the number of iterations that were required before the final solution was achieved.  Just as customers have zero tolerance for repeat occurrences, we should come to expect the same level of performance and accountability from our internal resources.

Although the goal may be to achieve a 100% First Time Through solution rate, be wary of Paralysis by Analysis while attempting to find the perfect solution.  The objective is to enhance the level of understanding of the problem and the intended solution, not to bring the flow of ideas to a halt.  Too often, activity is confused with action.  To effect change, actions are required.  The goal is to implement effective, NOT JUST ANY, solutions.

Jishuken

Literally translated, Jishuken means “self-study”.  It refers to a collaborative problem solving strategy in which, after all internal efforts have been exhausted, external resources are deployed with “fresh eyes” to share knowledge and attempt to achieve a resolution.  Before engaging those external resources, the person requesting a Jishuken event is expected to demonstrate that they have indeed become a student of the process by learning and demonstrating their knowledge of the process or problem.  While the end result does not appear to be “self-study”, the prerequisite for Jishuken is exhausting all internal efforts; in other words, the facility requesting outside help must first strive to become expert itself.

Summary

Many companies limit their formal problem solving activities to the realm of quality, and traditional problem solving tools are used only when non-conforming or defective product has been reported by the customer.  Truly agile / lean companies work ahead of the curve and attempt to find a cure before a problem becomes a reality at the customer level.

With this in mind, it stands to reason that any attempt to improve Overall Equipment Effectiveness or OEE also requires some form of problem solving that, in turn, can affect a positive change to one or all of the components that comprise OEE:  Availability, Performance, and First Time Through Quality.

As a reminder, OEE is the product of Availability (A) x Performance (P) x Quality (Q) and measures how effectively the available (scheduled) time was used to produce a quality product.  To get your free OEE tutorial or any one of our OEE templates, visit our Free Downloads page or pick the files you want from our free downloads box in the side bar.  You can easily customize these templates to suit your specific process or operation.
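As a minimal worked illustration (the figures are hypothetical, not drawn from any of our templates), a process running at 90% Availability, 95% Performance, and 98% Quality yields an OEE of 0.90 × 0.95 × 0.98 = 83.8%:

```vba
' OEE as the product of its three factors, each expressed as a fraction (0 to 1).
Function OEE(ByVal availability As Double, _
             ByVal performance As Double, _
             ByVal quality As Double) As Double
    OEE = availability * performance * quality
End Function

' Hypothetical example: 90% x 95% x 98% = 83.79% OEE.
Sub OEEExample()
    MsgBox Format(OEE(0.9, 0.95, 0.98), "0.00%")
End Sub
```

Note how a seemingly strong score in each individual factor still multiplies down to a considerably lower overall figure.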

Many years ago I read a quote that simply stated,

“The proof of wisdom is in the results.”

And so it is.

Until Next Time – STAY lean!

Vergence Analytics

Differentiation Strategies and OEE (Part II): The Heart of the Matter

An article published in Industry Week magazine continues our pursuit of differentiation strategies and OEE, and it serves as the topic of today’s post.

Enjoy the article, OEE: The heart of the matter, and we’ll provide our thoughts and insights as well.  If the link above does not work, you can copy and paste the following address into your browser:

http://www.industryweek.com/articles/oee_the_heart_of_the_matter_18211.aspx

Until Next Time – STAY lean!

Vergence Analytics

Differentiation Strategies and OEE (Part I)

If your competitors are using OEE to manage their processes just like you, how do you plan to establish a unique approach that helps your company to outperform them?  You may be surprised to learn that the perception of process management and control created by high technology OEE solutions can quickly be shattered by the lack of specific process level data collection and analysis.

A recent visit to a high volume production facility was typical of most operations, with all the latest technology to show FTQ (First Time Quality), OA (Operational Availability), Plan versus Actual rates, and plasma process displays showing fault locations.  We were disappointed and surprised to learn that the collection of raw data at the process level was extremely limited and required personnel to manually record data using paper tracking sheets.  While the displays were high technology, the infrastructure required to support data collection and provide meaningful analytics in real time did not exist.

In contrast, we have also toured facilities that have highly integrated data collection and analysis to support their OEE systems in real time.  Highly evolved and integrated OEE systems actually allow personnel to use the system to help them manage their processes as opposed to the first case where personnel are wondering whether anyone is even looking at the “numbers”.

From this perspective, the level of OEE integration in your organization could be a defining attribute that differentiates your company from your competitors.  This is easily demonstrated by showing how your data is used to manage and improve your results in real time versus others who are looking at the results only to scratch their heads and wonder what is going on.  The success of your integration is also directly correlated to the training provided to your team – at all levels.

There is more to differentiation than being the same but different.

Until Next Time – STAY lean!


Vergence Analytics

22 Seconds to Burn – Excel VBA Teaches Lean Execution

Cover of “Excel 2003 Power Programming with VBA” (image via Amazon).

Background:

VBA for Excel has once again provided the opportunity to demonstrate some basic lean tenets.  The methods used to produce the required product or solution can yield significant savings in time and ultimately money.  The current practice is not necessarily the best practice in your industry.  In manufacturing, trivial or minute differences in methods deployed become more apparent during mass production or as volume and demand increases.  The same is true for software solutions and both are subject to continual improvement and the relentless pursuit to eliminate waste.

Using Excel to demonstrate certain aspects of Lean is ideal.  First, numbers are the raw materials and formulas represent the processes or methods used to produce the final solution (or product).  Second, most businesses already use Excel to manage many of their daily tasks, so any extended learning can only help users to better understand the Excel environment.

The Model:

We recently created a perpetual Holiday calendar for one of our applications and needed an algorithm or procedure to calculate the date for Easter Sunday and Good Friday.  We adopted an algorithm found on Wikipedia at http://en.wikipedia.org/wiki/Computus that produces the correct date for Easter Sunday.

In our search for the Easter Algorithm, we found another algorithm that uses a different method of calculation and provides the correct results too.  Pleased to have two working solutions, we initially did not spend too much time thinking about the differences between them.  If both routines produce the same results then we should choose the one with the faster execution time.  We performed a simple time study to determine the most efficient formula.  For a single calculation, or iteration, the time differences are virtually negligible; however, when subjected to 5,000,000 iterations the time differences were significant.

This number of cycles may seem grossly overstated; however, when we consider how many automobiles and components are produced each year, 5,000,000 represents only a fraction of the total volume.  Taken further, Excel performs thousands of calculations a day, and many times that number as data are entered on a spreadsheet.  When we consider the number of “calculations” performed at any given moment, the total quickly grows beyond comprehension.

Testing:

As a relatively new student of John Walkenbach’s book, “Excel 2003 Power Programming with VBA“, speed of execution, efficiency, and “declaring your variables” have entered our world of Lean.  We originally created two (2) routines called EasterDay and EasterDate.  We then created a simple procedure to run each function through 5,000,000 cycles.  Again, this may sound like a lot of iterations, but computers work at remarkable speeds and we wanted enough resolution to discern any time differences between the routines.
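The original timing procedure is not reproduced in this post, so the sketch below shows how such a harness might look, assuming (hypothetically) that EasterDay and EasterDate each accept a year and return a Date.  VBA’s Timer function, which returns the number of seconds elapsed since midnight, provides enough resolution for this kind of comparison.

```vba
' Sketch of a timing harness (assumed, not the original code):
' run each routine through 5,000,000 cycles and report the elapsed seconds.
Sub TimeEasterRoutines()
    Const CYCLES As Long = 5000000
    Dim i As Long
    Dim start As Single
    Dim result As Date

    start = Timer
    For i = 1 To CYCLES
        result = EasterDay(2011)
    Next i
    Debug.Print "EasterDay:  " & Format(Timer - start, "0.000") & " seconds"

    start = Timer
    For i = 1 To CYCLES
        result = EasterDate(2011)
    Next i
    Debug.Print "EasterDate: " & Format(Timer - start, "0.000") & " seconds"
End Sub
```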

The difference in the time required to execute 5,000,000 cycles by each of the routines was surprising.  We recorded the test times (measured in seconds) for three separate studies as follows:

  • Original EasterDay:  31.34,  32.69,  30.94
  • Original EasterDate:  22.17,  22.28,  22.25

The differences between the two methods ranged from 8.69 to 10.41 seconds.  Expressed in different terms, the EasterDay routine took 1.39 to 1.47 times as long as EasterDate.  Clearly, the original EasterDate function has the better execution speed.  What we perceive as virtually identical systems or processes at low volumes can yield significant differences that are often only revealed or discovered by increased volume or the passage of time.

In the Canadian automotive industry there are at least five major OEMs (Toyota, Honda, Ford, GM, and Chrysler), collectively producing millions of vehicles a year.  All appear to produce similar products and perform similar tasks; however, the performance ratios for each of these companies are starkly different.  We recognize Toyota as the high velocity, lean, front-running company.  We contend that Toyota’s success is partly driven by its inherent attention to detail in processes and product lines at all levels of the company.

Improvements

We decided to revisit the Easter day calculations to see what could be done to improve the execution speed.  We created a new procedure called “EasterSunday”, using the original EasterDay procedure as our baseline.  Note that the original Wikipedia code had only been slightly modified to work in VBA for Excel: the FLOOR function was replaced with VBA’s INT function.  Otherwise, the procedure was adopted without further revision.

To create the final EasterSunday procedure, we made two revisions to the original code without changing the algorithm structure or the essence of the formulas themselves.  The changes resulted in significant performance improvements, summarized below (a sketch of the revised routine follows the list):

  1. For integer division, we replaced the INT(n / d) statements with the less commonly used (or known) “\” integer division operator.  In other words, we used “n \ d” in place of “INT(n / d)” wherever an integer result is required.  This change alone resulted in a gain of 11 seconds.  One word of caution if you plan to use the “\” operator: both “n” and “d” are rounded to integers before the division is performed, which can change the result when the operands are not whole numbers.
  2. We declared each of the variables used in the subsequent formulas and gained yet another remarkable 11 seconds.  Although John Walkenbach and certainly many other authors stress declaring variables, it is surprising to see how few published VBA procedures actually put this into practice.
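Since the post does not reproduce the code itself, the following is a sketch of what the revised routine might look like: the anonymous Gregorian (Computus) algorithm from the Wikipedia page cited above, written as a VBA function with every variable declared as Long and all integer division performed with the “\” operator.  Treat it as an illustration rather than our original EasterSunday procedure.

```vba
' Sketch of a revised EasterSunday routine based on the anonymous Gregorian
' (Computus) algorithm, with declared variables and "\" integer division.
Function EasterSunday(ByVal yr As Long) As Date
    Dim a As Long, b As Long, c As Long, d As Long, e As Long
    Dim f As Long, g As Long, h As Long, i As Long, k As Long
    Dim l As Long, m As Long, monthNum As Long, dayNum As Long

    a = yr Mod 19
    b = yr \ 100
    c = yr Mod 100
    d = b \ 4
    e = b Mod 4
    f = (b + 8) \ 25
    g = (b - f + 1) \ 3
    h = (19 * a + b - d - g + 15) Mod 30
    i = c \ 4
    k = c Mod 4
    l = (32 + 2 * e + 2 * i - h - k) Mod 7
    m = (a + 11 * h + 22 * l) \ 451
    monthNum = (h + l - 7 * m + 114) \ 31
    dayNum = ((h + l - 7 * m + 114) Mod 31) + 1

    EasterSunday = DateSerial(yr, monthNum, dayNum)
End Function
```

For 2011 this returns 24-Apr-2011, and Good Friday, which the holiday calendar also needed, is simply the result minus two days.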

Results:

The results of our time tests appear below.  Note that we ran several timed passes for each change, knowing that some variation in process time can occur.

All times are in seconds.  EasterDay (original code, using INT( n / d ) for integer division) and EasterDate (original code, alternate calculation method) are unmodified in every test; only EasterSunday changes.

Test 1
  • EasterDay = 31.34375
  • EasterSunday = 20.828125 – change 1: INT( n / d ) replaced with n \ d
  • EasterDate = 22.28125

Test 2 – re-test to confirm timing
  • EasterDay = 30.9375
  • EasterSunday = 20.921875 – change 1
  • EasterDate = 22.25

Test 3 – re-test to confirm timing
  • EasterDay = 30.90625
  • EasterSunday = 21.265625 – change 1
  • EasterDate = 22.25

Test 4 – re-test to confirm timing
  • EasterDay = 31.078125
  • EasterSunday = 9.171875 – changes 1 and 2: variables declared
  • EasterDate = 22.1875

Test 5 – re-test to confirm timing
  • EasterDay = 31.109375
  • EasterSunday = 9.171875 – changes 1 and 2: variables declared
  • EasterDate = 22.171875

The EasterSunday procedure contains the changes described above.  We achieved a total savings of approximately 22 seconds.  Both integer division methods yield the same result; however, one is clearly faster than the other.

The gains made by declaring variables were just as significant.  In VBA, undeclared variables default to the “Variant” type.  Although Variant types are more flexible by definition, performance diminishes significantly.  We saved at least an additional 11 seconds simply by declaring variables.  Variable declarations are to VBA what policies are to your company: they define the “size and scope” of the working environment.  Undefined policies or vague specifications create ambiguity and generate waste.

Lessons Learned:

In manufacturing, a 70% improvement is significant, worthy of awards, accolades, and public recognition.  The lessons learned from this example are eight-fold:

  1. For manufacturing, do not assume the current working process is the “best practice”.  There is always room for improvement.  Make time to understand and learn from your existing processes.  Look for solutions outside of your current business or industry.
  2. Benchmarking a current practice against another existing practice is just the incentive required to make changes.  Why is one method better than another?  What can we do to improve?
  3. Policy statements can influence the work environment and execution of procedures or methods.  Ambiguity and lack of clarity create waste by expending resources that are not required.
  4. Improvements to an existing process are possible, with results that outperform the nearest known competitor.  We anticipated at least being able to have the two routines run at similar speeds.  We did not anticipate the final EasterSunday routine running more than 50% faster than our simulated competitive benchmark (EasterDate).
  5. The greatest opportunities are found where you least expect them.  Learning to see problems is one of the greatest challenges that most companies face.  The example presented in this simple analogy completely shatters the expression, “If it ain’t broke, don’t fix it.”
  6. Current practices are not necessarily best practices and best practices can always be improved.  Focusing on the weaknesses of your current systems or processes can result in a significant competitive edge.
  7. Accelerated modeling can highlight opportunities for improvement that would otherwise not be revealed until full high volume production occurs.  Many companies are already using process simulation software to emulate accelerated production to identify opportunities for improvement.
  8. The most important lesson of all is this:

Speed of Execution is Important >> Thoughtful Speed of Execution is CRITICAL.

We wish you all the best of this holiday season!

Until Next Time – STAY Lean!

Vergence Analytics

At the outset of the holiday project, the task seemed relatively simple until we discovered that the rules for Easter Sunday did not follow the simple rules that applied to other holidays throughout the year.  As a result, we learned more about history, astronomy, and the tracking of time than we ever would have thought possible.

We also learned that Excel’s spreadsheet MOD formula is subject to precision errors and the VBA version of MOD can yield a different result than the spreadsheet version.
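As a hypothetical illustration (not taken from the original project), one visible way the two can disagree is that VBA’s Mod operator rounds fractional operands to whole numbers before dividing, whereas the worksheet MOD function does not:

```vba
' Assumed example: the two MOD implementations give different answers
' for fractional operands.
Sub CompareModResults()
    Dim sheetMod As Variant, vbaMod As Double

    sheetMod = Application.Evaluate("MOD(3.5, 2)")   ' worksheet MOD: 1.5
    vbaMod = 3.5 Mod 2                               ' VBA rounds 3.5 to 4 first: 0

    Debug.Print "Worksheet MOD(3.5, 2) = " & sheetMod
    Debug.Print "VBA 3.5 Mod 2 = " & vbaMod
End Sub
```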

We also rediscovered Excel’s leap year bug (29-Feb-1900).  1900 was not a leap year.  The bug resides in the spreadsheet version of the date functions; the VBA date functions recognize that 29-Feb-1900 is not a valid date.
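As an assumed illustration of the quirk (again, not code from the original project):

```vba
' The worksheet DATE function accepts the non-existent 29-Feb-1900
' (a compatibility holdover), while VBA's calendar does not.
Sub LeapYearQuirkDemo()
    ' Worksheet side: serial number 60 displays as 29-Feb-1900.
    Debug.Print Application.Evaluate("TEXT(DATE(1900,2,29),""dd-mmm-yyyy"")")

    ' VBA side: DateSerial knows 1900 was not a leap year,
    ' so day 29 rolls over to 01-Mar-1900.
    Debug.Print Format(DateSerial(1900, 2, 29), "dd-mmm-yyyy")
End Sub
```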