Category: Performance

Strategies to improve Performance include Value Analysis / Value Engineering, Standardized Work, and Cycle Time Reduction through process improvement.

Are You Suffering from Fragmentation?

Task Life Cycle - Image via Wikipedia.

When Toyota arrived on the North American manufacturing scene, automakers were introduced to many of Toyota’s best practices including the Toyota Production System (TPS) and the well-known “Toyota Way”.  Since that time, Toyota’s best practices have been introduced to numerous other industries and service providers with varying degrees of success.

In simple terms, Toyota’s elusive goal of single piece flow implicitly demands that parts be processed one piece at a time and only as required by the customer.  The practice of batch processing was successfully challenged and proven to be inefficient as the practice inherently implies a certain degree of fragmentation of processes, higher inventories, longer lead times, and higher costs.
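To see why, consider a quick sketch comparing the two approaches.  The line below is hypothetical – three sequential operations at one minute each, an order of ten parts:

```python
# Minimal sketch: lead time for batch processing versus one-piece flow.
# Assumes a hypothetical line of 3 sequential operations, 1 minute each,
# and a batch (or order) of 10 parts.

OPERATIONS = 3      # sequential process steps
CYCLE_MIN = 1.0     # minutes per part per operation
BATCH_SIZE = 10     # parts per batch

# Batch processing: the whole batch finishes one operation before moving on.
batch_lead_time = OPERATIONS * BATCH_SIZE * CYCLE_MIN

# One-piece flow: parts move individually; once the first part clears the
# line, a finished part emerges every cycle.
flow_lead_time = OPERATIONS * CYCLE_MIN + (BATCH_SIZE - 1) * CYCLE_MIN

print(f"Batch lead time:          {batch_lead_time:.0f} min")   # 30 min
print(f"One-piece-flow lead time: {flow_lead_time:.0f} min")    # 12 min
```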

At the other extreme, over-specialization can also lead to excessive process fragmentation, evidenced by decreased efficiency, higher labour costs, and increased lead times.  In other words, we must ensure that process tasks are optimized to the extent that maximum flow is achieved in the shortest amount of time.

An example of excessive specialization can be found in the healthcare system here in Ontario, Canada.  Patients visit their family doctor only to be sent to a specialist, who in turn prescribes a series of tests to be completed by yet another layer of “specialists”.  To complicate matters even more, each of these specialized services is inconveniently separated geographically as well.

Excessive fragmentation can be found by conducting a thorough review of the entire process.  The review must consider the time required to perform “real value added” tasks versus non-value added tasks as well as the time-lapse that may be incurred between tasks.  Although individual “steps” may be performed efficiently and within seconds, minutes, or hours, having to wait several days, weeks, or even months between tasks clearly undermines the efficiency of the process as a whole.
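To make this concrete, here is a minimal sketch that computes the share of elapsed time that actually adds value, using hypothetical durations in the spirit of the healthcare example:

```python
# Minimal sketch: quantifying fragmentation by comparing value-added task
# time to total elapsed time.  Task names and durations are hypothetical.

tasks = [
    # (task, value-added minutes, wait before the next task in minutes)
    ("Family doctor visit", 15, 20_160),   # ~2 weeks until the specialist
    ("Specialist consult",  30, 10_080),   # ~1 week until testing
    ("Diagnostic tests",    45, 0),
]

value_added = sum(va for _, va, _ in tasks)
total_elapsed = sum(va + wait for _, va, wait in tasks)

# Process cycle efficiency: the share of elapsed time that adds value.
pce = value_added / total_elapsed
print(f"Value-added: {value_added} min of {total_elapsed} min elapsed")
print(f"Process cycle efficiency: {pce:.2%}")   # well under 1% here
```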

In the case of healthcare, the time lapse between visits or “tasks” is borne by the patient and since the facilities are managed independently, wait times are inherently extended.  Manufacturers suffer a similar fate where outside services are concerned.  Localization of services is certainly worthy of consideration when attempting to reduce lead times and ultimately cost.

Computers use de-fragmentation software to “relocate” data in a manner that facilitates improved file storage and retrieval.  If only we could “defrag” our processes in a similar way to improve our manufacturing and service industries.  “Made In China” labels continue to appear on far too many items that could be manufactured here at home!

Until Next Time – Stay LEAN!

Vergence Analytics


Are you Winning? A Hockey Lesson for Lean Metrics.

Toronto Maple Leafs
Image via Wikipedia

The world of sports is rife with statistics and hockey is no exception, especially here in Canada.  Over the past few weeks, local Toronto hockey fans anxiously watched or listened for the results of every Toronto Maple Leafs game – all the while hoping for a win and a shot at making it into the playoffs.

As has been the case for many years now, the Leafs’ contention for a playoff spot was equally dependent on their own performance and that of their competitors.  The Leafs finally started to win games, but so did their competitors.  In the end, they didn’t make it.

It is interesting to note that, despite their lacklustre record, the Maple Leafs are one of the top franchises in the National Hockey League (NHL).  Thanks to the Toronto Star, I learned that we have 45 reasons to hope.

What is the lesson here?

All the statistics or metrics in the world won’t change the final outcome for the Toronto Maple Leafs and neither will any of the excuses for their poor performance.

These players are paid professionals, hired for the specific purpose of helping their team win hockey games.  In the end, no one cares about player performance data, injuries, shots on net, penalties, goals against, or any other metric.

To me it really comes down to one question:

Are you Winning?

The answer to this question is either Yes or No.  There is no room for excuses or “it depends”.  You either know or you don’t.  In hockey, it’s easy.  The metric that matters is the final score at the end of the game.

We are all paid to perform – excuses don’t count.  Determine which metric defines winning performance and be ready when someone asks:

Are you Winning?

As any rainmaker knows, customers expect a quality, low cost product or solution, delivered on time, and in full.  To do anything less is inexcusable.

Until Next Time – STAY lean!

Vergence Analytics
Twitter:  @Versalytics

Scorecards and Dashboards

Interior of the 2008 Cadillac CTS
Image via Wikipedia

I recently published Urgent -> The Cost of Things Gone Wrong, where I expressed concern about dashboards that attempt to do too much.  They become a distraction instead of serving their intended purpose of helping you manage your business or processes.  To be fair, there are at least two (2) levels of data management that are perhaps best differentiated by where and how they are used:  Scorecards and Dashboards.

I prefer to think of Dashboards as working with Dynamic Data:  data that changes in real time and influences our behaviour, much as the dashboards in our cars communicate with us as we are driving.  The fuel gauge, odometer, two trip meters, tachometer, speedometer, digital fuel consumption (L/100 km), and km remaining are just a few examples of the instrumentation available to me in my Mazda 3.

While I appreciate the extra instrumentation, the two that matter first and foremost are the speedometer and the tachometer (since I have a 5 speed manual transmission).  The other bells and whistles do serve a purpose but they don’t necessarily cause me to change my driving behavior.  Of note here is that all of the gauges are dynamic – reporting data in real time – while I’m driving.

A Scorecard, on the other hand, is a periodic view of summary data.  From our example it may include Average Fuel Consumption, Average Speed, Maximum Speed, Average Trip, Maximum Trip, Total Miles Traveled, and so on.  The scorecard may also include driving record and vehicle performance data such as Parking Tickets, Speeding Tickets, Oil Changes, Flat Tires, and Emergency and Preventive Maintenance.
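To make the distinction concrete, here is a minimal sketch – with hypothetical trip data – treating the dashboard as a live reading and the scorecard as an after-the-fact summary:

```python
# Minimal sketch: a dashboard reads live samples; a scorecard summarizes
# them after the fact.  The trip data below is hypothetical.

from statistics import mean

# Speed samples (km/h) captured during a trip - the "dynamic" dashboard data.
speed_samples = [0, 42, 58, 61, 97, 103, 88, 54, 0]

def dashboard(sample_kmh):
    """Real-time view: the current reading, available while driving."""
    return f"Speed now: {sample_kmh} km/h"

def scorecard(samples):
    """Periodic view: summary statistics, reviewed after the trip."""
    return {
        "average_speed_kmh": round(mean(samples), 1),
        "max_speed_kmh": max(samples),
        "sample_count": len(samples),
    }

print(dashboard(speed_samples[-1]))   # what you watch while driving
print(scorecard(speed_samples))       # what you review afterwards
```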

One of my twitter connections, Bob Champagne (@BobChampagne), published an article titled, Dashboards Versus Scorecards- Its all about the decisions it facilitates…, that provides some great insights into Scorecards and Dashboards.  This article doesn’t require any further embellishment on my part so I encourage you to click here or paste the following link into your browser:  http://wp.me/p1j0mz-6o.  I trust you will find the article both informative and engaging.

Next Steps:

Take some time to review your current metrics.  What metrics are truly influencing your behaviors and actions?  How are you using your metrics to manage your business?  Are you reacting to trends or setting them?

It’s been said that, “What gets measured gets managed.”  I would add – “to a point.”  It simply isn’t practical or even feasible to measure everything.  I say, “Measure to manage what matters most”.

Remember to get your free Excel Templates for OEE by visiting our downloads page or the orange widget in the sidebar.  You can follow us on twitter as well @Versalytics.

Until Next Time – STAY lean!

Vergence Analytics

OEE in an imperfect world

Normal distribution probability curves
Image via Wikipedia

Background: This is a more general presentation of “Variation:  OEE’s Silent Partner” published on January 31, 2011.

In a perfect world we can produce quality parts at rate, on time, every time.  In reality, however, all aspects of our processes are subject to variation that affects each factor of Overall Equipment Effectiveness:  Availability, Performance, and Quality.

Our ability to effectively implement Preventive Maintenance programs and Quality Management Systems is reflected in our ability to control and improve our processes, eliminate or reduce variation, and increase throughput.

The Variance Factor

Every process and measurement is subject to variation and error.  It is only reasonable to expect that metrics such as Overall Equipment Effectiveness and Labour Efficiency will also exhibit variance.  The normal distributions for four (4) different data sets are represented by the graphic that accompanies this post.  You will note that the average for 3 of the curves (Blue, Red, and Yellow) is common (μ = 0), yet the shapes of the curves are radically different.  The green curve shows a normal distribution that is shifted to the left – its average (μ) is -2 – although we can see that the standard deviation for this distribution is smaller than that of the yellow and red curves.

The graphic also allows us to see the relationship between the Standard Deviation and the shape of the curve.  As the Standard Deviation increases, the height of the curve decreases and its width increases.  From these simple representations, we can see that our objective is to reduce the standard deviation.  The only way to do this is to reduce or eliminate variation in our processes.

We can use a variety of statistical measurements to help us determine or describe the amount of variation we may expect to see.  Although we are not expected to become experts in statistics, most of us should already be familiar with the normal distribution or “bell curve” and terms such as Average, Range, Standard Deviation, Variance, Skewness, and Kurtosis.  In the absence of an actual graphic, these terms help us to picture what the distribution of data may look like in our mind’s eye.
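For anyone who wants to see these terms in action, here is a minimal sketch that computes each of them for a hypothetical set of hourly throughput counts (using NumPy and SciPy):

```python
# Minimal sketch: the descriptive statistics named above, computed for a
# hypothetical set of hourly throughput counts.

import numpy as np
from scipy import stats

hourly_counts = np.array([112, 98, 120, 87, 105, 118, 64, 110])  # parts/hour

print(f"Average:  {hourly_counts.mean():.1f}")
print(f"Range:    {hourly_counts.max() - hourly_counts.min()}")
print(f"Std dev:  {hourly_counts.std(ddof=1):.1f}")      # sample std deviation
print(f"Variance: {hourly_counts.var(ddof=1):.1f}")
print(f"Skewness: {stats.skew(hourly_counts):.2f}")      # asymmetry
print(f"Kurtosis: {stats.kurtosis(hourly_counts):.2f}")  # tail weight
```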

Run Time Data

The simplest common denominator and most readily available measurement for production is the quantity of good parts produced.  Many companies have real-time displays that show quantity produced and in some cases go so far as to display Overall Equipment Effectiveness (OEE) and its factors – Availability, Performance, and Quality.  While the expense of live streaming data displays can be difficult to justify, there is no reason to abandon the intent that such systems bring to the shop floor.  Equivalent reporting can be achieved using “whiteboards” or other forms of data collection.

I am concerned with any system that is based solely on cumulative shift or run data and does not include run time history.  An often overlooked opportunity for improvement is the lack of stability in productivity or throughput over the course of the run.  Systems with run time data allow us to identify production patterns and significant swings in throughput, and to correlate this data with down time history.  This production story board allows us to analyze sources of instability, identify root causes, and implement timely and effective corrective actions.  For processes where throughput is highly unstable, I recommend a direct hands-on review on the shop floor in lieu of post production data analysis.

Overall Equipment Effectiveness

Overall Equipment Effectiveness and the factors Availability, Performance, and Quality do not adequately or fully describe the capability of the production process.  Reporting on the change in standard deviation as well as OEE provides a more meaningful understanding of the process  and its inherent capability.

Improved capability also improves our ability to predict process throughput.  Your materials / production control team will certainly appreciate any improvements to stabilize process throughput as we strive to be more responsive to customer demand and reduce inventories.
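As a rough sketch of what reporting the standard deviation alongside OEE could look like – hypothetical shift data, with OEE computed from the conventional Availability x Performance x Quality factors:

```python
# Minimal sketch: report OEE together with the standard deviation of hourly
# throughput.  All quantities below are hypothetical.

import statistics

planned_minutes = 480                          # scheduled production time
downtime_minutes = 60
ideal_rate = 2.0                               # parts per minute at standard
hourly_good = [95, 110, 70, 120, 88, 105, 92]  # good parts per run hour

run_minutes = planned_minutes - downtime_minutes
total_good = sum(hourly_good)
total_produced = total_good + 20               # assume 20 rejects

availability = run_minutes / planned_minutes
performance = total_produced / (run_minutes * ideal_rate)
quality = total_good / total_produced
oee = availability * performance * quality

print(f"OEE: {oee:.1%}")                       # ~70.8%
print(f"Hourly throughput std dev: {statistics.stdev(hourly_good):.1f}")
```

Two processes can report the same OEE while one produces steadily and the other lurches between extremes; the standard deviation is what tells them apart.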

Until Next Time – STAY lean!

Vergence Analytics

Twitter:  @Versalytics

Variance – OEE’s Silent Partner (Killer)

Two sample populations with the same mean and different standard deviations
Image via Wikipedia

I was recently involved in a discussion regarding the value of Overall Equipment Effectiveness (OEE).  Of course, I fully supported OEE and confirmed that it can bring tremendous value to any organization that is prepared to embrace it as a key metric.  I also qualified my response by stating that OEE cannot be managed in isolation:

OEE and its intrinsic factors – Availability, Performance, and Quality – are summary level indices and do not measure or provide any indication of process stability or capability.

As a top level metric, OEE does not describe or provide a sense of actual run-time performance.  For example, when reviewing Availability, we have no sense of duration or frequency of down time events, only the net result.  In other words we can’t discern whether downtime was the result of a single event or the cumulative result of more frequent down time events over the course of the run.  Similarly, when reviewing Performance, we cannot accurately determine the actual cycle time or run rate, only the net result.

As shown in the accompanying graphic, two data sets (represented by Red and Blue) having the same average can present very different distributions, as depicted by the range of the data, the height and spread of the curves, and significantly different standard deviations.

Clearly, any conclusions regarding the process simply based on averages would be very misleading.  In this same context, it is also clear that we must exercise caution when attempting to compare or analyse OEE results without first considering a statistical analysis or representation of the raw process data itself.

The Missing Metrics

Fortunately, we can use statistical tools to analyse run-time performance and determine whether our process is capable of producing parts consistently, just as Quality Assurance personnel use the same tools to determine whether a process is capable of producing parts to specification.

One of the greatest opportunities for improving OEE is to use statistical tools to identify opportunities to reduce throughput variance during the production run.

Run-Time or throughput variance is OEE’s silent partner as it is an often overlooked aspect of production data analysis.  Striving to achieve consistent part to part cycle times and consistent hour to hour throughput rates is the most fundamental strategy to successfully improve OEE.  You will note that increasing throughput requires a focus on the same factors as OEE: Availability, Performance, and Quality.  In essence, efforts to improve throughput will yield corresponding improvements in OEE.

Simple throughput variance can readily be measured using Planned versus Actual quantities produced – either over fixed periods of time (preferred) or cumulatively.  Some of the benefits of using quantity based measurement are as follows:

  1. Everyone on the shop floor understands quantity or units produced,
  2. This information is usually readily available at the work station,
  3. Everyone can understand or appreciate its value in tangible terms,
  4. Quantity measurements are less prone to error, and
  5. Quantities can be verified (Inventory) after the fact.

For the sake of simplicity, consider measuring hourly process throughput and calculating the average, range, and standard deviation of this hourly data.  With reference to the graphic above, even this fundamental data can provide a much more comprehensive and improved perspective of process stability or capability than would otherwise be afforded by a simple OEE index.
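A minimal sketch of that hourly measurement, with hypothetical quantities:

```python
# Minimal sketch: hourly planned-versus-actual throughput with the basic
# statistics described above.  Quantities are hypothetical.

import statistics

planned_per_hour = 100
actual = [96, 102, 71, 99, 104, 88, 101, 97]   # good parts, by hour

gaps = [a - planned_per_hour for a in actual]

print(f"Average: {statistics.mean(actual):.1f} parts/hour")
print(f"Range:   {max(actual) - min(actual)}")
print(f"Std dev: {statistics.stdev(actual):.1f}")

# The hours with the largest shortfall are the first places to investigate.
worst = min(range(len(gaps)), key=lambda h: gaps[h])
print(f"Largest shortfall: hour {worst + 1} ({gaps[worst]:+d} vs plan)")
```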

Using this data, our objective is to identify those times where the greatest throughput changes occurred and to determine what improvements or changes can be implemented to achieve consistent throughput.  We can then focus our efforts on improvements to achieve a more predictable and stable process, in turn improving our capability.

In OEE terms, we are focusing our efforts to eliminate or reduce variation in throughput by improving:

  1. Availability by eliminating or minimizing equipment downtime,
  2. Performance through consistent cycle to cycle task execution, and
  3. Quality by eliminating the potential for defects at the source.

Measuring Capability

To make sure we’re on the same page, let’s take a look at the basic formulas that may be used to calculate Process Capability.  In the automotive industry, suppliers may be required to demonstrate process capability for certain customer designated product characteristics or features.  When analyzing this data, two sets of capability formulas are commonly used:

  1. Preliminary (Pp) or Long Term (Cp) Capability:  Determines whether the product can be produced within the required tolerance range,
    • Pp or Cp = (Upper Specification Limit – Lower Specification Limit) / (6 x Standard Deviation)
  2. Preliminary (Ppk) or Long Term (Cpk) Capability:  Determines whether the product can be produced at the target dimension and within the required tolerance range:
    • Ppk or Cpk = Minimum of Either:
      • Capability Upper = (Upper Specification Limit – Average) / (3 x Standard Deviation)
      • Capability Lower = (Average – Lower Specification Limit) / (3 x Standard Deviation)

When Pp = Ppk or Cp = Cpk, we can conclude that the process is centered on the target or nominal dimension.  In the automotive industry, the minimum acceptable preliminary index (Ppk) is typically 1.67, while the minimum ongoing index (Cpk) is typically 1.33; meeting these thresholds implies that the process is capable of producing parts that conform to customer requirements.
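For reference, a minimal sketch of the formulas above in code form, using hypothetical specification limits and measurements:

```python
# Minimal sketch of the capability formulas above.  The specification
# limits and measurements are hypothetical; sigma is the sample standard
# deviation of the measured data.

import statistics

def cp(usl, lsl, sigma):
    """Pp / Cp: does the tolerance range span +/- 3 sigma of the process?"""
    return (usl - lsl) / (6 * sigma)

def cpk(usl, lsl, mu, sigma):
    """Ppk / Cpk: is the process both within tolerance and on target?"""
    upper = (usl - mu) / (3 * sigma)
    lower = (mu - lsl) / (3 * sigma)
    return min(upper, lower)

measurements = [10.02, 9.98, 10.01, 9.97, 10.03, 10.00, 9.99]
mu = statistics.mean(measurements)
sigma = statistics.stdev(measurements)

print(f"Cp:  {cp(10.10, 9.90, sigma):.2f}")
print(f"Cpk: {cpk(10.10, 9.90, mu, sigma):.2f}")
```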

In our case we are measuring quantities or throughput data, not physical part dimensions, so we can calculate the standard deviation of the collected data to determine our own “natural” limits (6 x Standard Deviation). Regardless of how we choose to present the data, our primary concern is to improve or reduce the standard deviation over time and from run to run.

Once we have a statistical model of our process, control charts can be created that in turn are used to monitor future production runs.  This provides the shop floor with a visual base line using historical data (average / limits) on which improvement targets can be made and measured in real-time.
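As a rough sketch, the control limits can be derived from historical hourly counts and then applied to a new run (hypothetical data):

```python
# Minimal sketch: control limits for an hourly throughput run chart,
# derived from historical data.  All counts are hypothetical.

import statistics

history = [96, 102, 94, 99, 104, 88, 101, 97, 95, 103]  # good parts/hour

center = statistics.mean(history)
sigma = statistics.stdev(history)
ucl = center + 3 * sigma   # upper control limit
lcl = center - 3 * sigma   # lower control limit

print(f"Center: {center:.1f}  UCL: {ucl:.1f}  LCL: {lcl:.1f}")

# Monitoring a new run: flag any hour outside the historical limits.
for hour, count in enumerate([98, 71, 100], start=1):
    if not lcl <= count <= ucl:
        print(f"Hour {hour}: {count} parts - outside control limits")
```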

Run-Time Variance Review

I recall using this strategy to achieve monumental gains – a three shift operation with considerable instability became an extremely capable and stable two shift production operation, coupled with a one shift preventive maintenance / change over team.  Month over month improvements were evidenced by significantly improved capability data (a substantially reduced Standard Deviation) and marked increases in OEE.

Process run-time charts with statistical controls were implemented for quantities produced just as the Quality department maintains SPC charts on the floor for product data.  The shop floor personnel understood the relationship between quantity of good parts produced and how this would ultimately affect the department OEE as well.

Monitoring quantities of good parts produced over shorter fixed time intervals is more effective than a cumulative counter that tracks performance over the course of the shift.  In this specific case, the quantity was “reset” for each hour of production essentially creating hourly in lieu of shift targets or goals.

Recording / plotting production quantities at fixed time intervals combined with notes to document specific process events creates a running production story board that can be used to identify patterns and other process anomalies that would otherwise be obscured.

Conclusion

I am hopeful that this post has heightened your awareness of the data that is represented by our chosen metrics.  In the boardroom, metrics are too often viewed as absolute and sterile values.

Run-Time Variance also introduces a new perspective when attempting to compare OEE between shifts, departments, and factories.  From the context of this post, having OEE indices of the same value does not imply equality.  As we can see, metrics are not pure and perhaps even less so when managed in isolation.

Variance is indeed OEE’s Silent Partner but left unattended, Variance is also OEE’s Silent Killer.

Until Next Time – STAY lean!

Vergence Analytics

Twitter:  @Versalytics

OEE and Human Effort

A girl riveting machine operator at the Douglas Aircraft Company plant
Image by The Library of Congress via Flickr

I was recently asked to consider a modification to the OEE formula to calculate labour versus equipment effectiveness.  This request stemmed from the observation that some processes, like assembly or packing operations, may be completely dependent on human effort.  In other words, the people performing the work ARE the machine.

I have observed situations where an extra person was stationed at a process to assist with loading and packing of parts so the primary operator could focus on assembly alone.  In contrast, I have also observed processes running with fewer operators than required by the standard due to absenteeism.

In other situations, personnel have been assigned to perform additional rework or sorting operations to keep the primary process running.  It is also common for someone to be assigned to a machine temporarily while another machine is down for repairs.  In these instances, the ideal number of operators required to run the process may not always be available.

Although the OEE Performance factor may reflect the changes in throughput, the OEE formula does not offer the ability to discern the effect of labour.  It may be easy to recognize where people have been added to an operation because performance exceeds 100%.  But what happens when fewer people have been assigned to an operation or when processes have been altered to accommodate additional tasks that are not reflected in the standard?

Based on our discussion above, it seems reasonable to consider a formula that accounts for Labour Effort.  If our metrics are to identify where variances to standard exist, the number of direct labour employees should be one of the inputs.  At a minimum, a new cycle time should be established based on the number of people present.
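As a rough sketch of that adjustment – hypothetical numbers, and assuming output scales roughly linearly with the number of operators present:

```python
# Minimal sketch: re-base the ideal cycle time on actual staffing for a
# labour-dependent process.  Assumes output scales linearly with crew size.

def adjusted_cycle_time(ideal_cycle_s, standard_crew, actual_crew):
    """Scale the ideal cycle time by the staffing level actually present."""
    return ideal_cycle_s * standard_crew / actual_crew

ideal = 30.0   # seconds per part with the standard crew of 4 (hypothetical)
print(adjusted_cycle_time(ideal, standard_crew=4, actual_crew=3))  # 40.0 s
print(adjusted_cycle_time(ideal, standard_crew=4, actual_crew=5))  # 24.0 s
```

The adjusted cycle time then feeds the Performance calculation, so a short-staffed shift is measured against an achievable target rather than the full-crew standard.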

OEE versus Financial Measurement

Standard Cost Systems are driven by a defined method or process and rate for producing a given product.  Variances in labour, material, and / or process will also become variances to the standard cost and will be reflected as such in the financial statements.  For this reason, OEE data must reflect the “real” state of the process.

If labour is added (over standard) to an operation to increase throughput, the process has changed.  Unless the standard is revised, OEE results will be reported as higher, while the costs associated with production may show only a minimal variance because they are based on the standard cost.  We have now lost our ability to correlate OEE data with some of our key financial performance indicators.

Until Next Time – STAY lean!

Vergence Analytics

OEE, Labour, and Inventory

Almost every manufacturing facility has a method or means to measure labour efficiency.  Some of these methods may include Earned versus Actual hours or perhaps they are financially driven metrics such as “Labour as a Percent of Sales” or as “Labour Variance to Plan”.  As we have learned all too well through the latest economic downturn, organizations are quite adept at using these metrics to flex direct labour levels based on current demand.  This suggests that almost every company has access to at least a  financial model of some form that can be used to represent “ideal” work force requirements based on sales.

It is not our intent to discuss how these models are created; however, I can only trust that the financial model is based on a realistic assessment of current process capabilities and the resources required to support the product mix represented by the sales forecast.  At a minimum, the assessment should include the following standards and known variances for each process:  Material, Labour, and Rate.  You may recognize these standards as they form the basis of the OEE cost model that we have discussed in detail and offer on our free downloads page.

Analyzing the Data

Many companies use both Labour Efficiency and Overall Equipment Effectiveness to measure the performance of their manufacturing operations.  We would also expect a strong correlation to exist between these two metrics as the basis for their measurement is fundamentally common.  As you may have already observed in your own operations, this is not always the case in the real world.  The disconnect between these two metrics is a strong indicator that yet another opportunity for improvement may exist.

For example, it is not uncommon to see operations where OEE is 60% – 70% while labour efficiencies are reported to be 95% or better.  How is this possible?  The simple answer is that labour is redirected to perform other work while a machine is down or, in extreme cases, the work force is sent home.  In both cases, OEE continues to suffer while labour is managed to minimize the immediate financial impact of the downtime.
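A quick sketch with hypothetical numbers shows how far the two metrics can diverge:

```python
# Minimal sketch: two hours of machine downtime with the crew redirected to
# other earned work.  All numbers are hypothetical.

shift_hours = 8.0
downtime_hours = 2.0
crew = 4

# OEE's Availability factor absorbs the full downtime.
availability = (shift_hours - downtime_hours) / shift_hours    # 75%

# Labour efficiency barely notices: the crew earned hours elsewhere.
paid_hours = crew * shift_hours                                # 32.0 h
earned_on_job = crew * (shift_hours - downtime_hours)          # 24.0 h
earned_on_backup = crew * 1.8        # ~1.8 h each on a backup job
labour_efficiency = (earned_on_job + earned_on_backup) / paid_hours

print(f"Availability (drives OEE): {availability:.0%}")        # 75%
print(f"Labour efficiency:         {labour_efficiency:.0%}")   # ~97%
```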

Set up and / or change over may be one of the reasons for down time and another reason why there is a perceived discrepancy between labour efficiency and overall equipment effectiveness.  Some companies employ personnel specifically trained to perform these tasks, and these employees are classified as indirect labour.

Redirecting labour to operate other machines presents its own unique set of problems and is typically frowned upon in lean organizations.  Companies that follow this practice must ensure that adequate controls are in place to prevent excess inventories from building over time.  I reluctantly concede to the practice of “redeployment during downtime” if it is indeed being managed.

Some would argue that the alternate work is being managed because the schedule actually includes a backup job if a given machine goes down.  If we probe deep enough, we may be surprised to learn that some of these backup jobs are actually “never” scheduled because the primary scheduled machines “always” provide ample downtime to finish orders of “unscheduled” backup work.  As such, we must be fully aware of the potential to create the “hidden factory” that runs when the real one isn’t.

Pitfalls of Redirected Labour

This practice easily becomes a learned behavior and tends to place more emphasis on preserving labour efficiency than actually increasing the sense of urgency required to solve the real problem.  In all too many cases the real problem is never solved.

Too many opportunities to improve operations are missed because many planners have learned to compensate for processes that continually fail to perform.  Experience shows that production schedules evolve over time to include backup jobs and alternate machines that ultimately serve as a mask to keep real problems from surfacing.  From a labour and OEE perspective, everything appears to be normal.

Redirecting labour to compensate for Process deficiencies may give rise to excess inventory.  “Increased inventory” is an extremely high price to pay for the sake of perceived efficiency in the short-term.  Higher inventory has an immediate negative impact to cash flow in the short-term as real money is now represented by parts in inventory until consumed or sold.  Additional penalties of inventory include carrying and handling costs that are also worthy of consideration.

Three Metrics – Working Together

You will note that we deliberately used the term labour efficiency throughout our discussion; this presents an opportunity to demonstrate that efficiency and effectiveness are not synonymous.  Efficiency measures our ability to produce parts at rate, while effectiveness measures our ability to produce the right quantity of quality parts at the right time.

Overall Equipment Effectiveness, Labour Efficiency, and Inventory are truly complementary metrics that can be used to determine how effectively we are managing our resources:  Human, Equipment, Material, and Time.  Our mission is to safely produce a quality part at rate, delivered on time and in full, at the lowest possible cost.  Analyzing the data derived through our metrics is the key to understanding where opportunities persist.  Once identified, we can effectively solve the problems and implement corrective actions accordingly.

Until Next Time – STAY lean!

Vergence Analytics