Category: Lean Metrics

Lean – A Race Against Time


Background

If “Time is Money”, is it reasonable for us to consider that “Wasting Time is Wasting Money?”

Whether we are discussing customer service, health care, government services, or manufacturing – waste is often identified as one of the top concerns that must be addressed and ultimately eliminated.  As is often the case in most organizations, the next step is an attempt to define waste.  Although they are not the focus of our discussion, the commonly known “wastes” from a lean perspective are:

  • Over-Production
  • Inventory
  • Correction (Non-Conformance  – Quality)
  • Transportation
  • Motion
  • Over Processing
  • Waiting

An eighth waste, often added to this list, is underutilized talent and resourcefulness: it occurs when resources and talent are not utilized to their full potential.

Where did the Time go?

As a lean practitioner, I acknowledge these wastes exist, but there must have been an underlying concern or thinking process that caused this list to be created.  In other words, lists don’t just appear; they are created for a reason.

As I pondered this list, I realized that the greatest single common denominator of each waste is TIME.  Again, from a lean perspective, TIME is the basis for measuring throughput.  As such, our Lean Journey is ultimately founded on our ability to reduce or eliminate the TIME required to produce a part or deliver a service.

As a non-renewable resource, we must learn to value time and use it effectively.  Again, as we review the list above, we can see that lost time is an inherent trait of each waste.  We can also see how this list extends beyond the realm of manufacturing.  TIME is a constant constraint that is indeed a challenge to manage even in our personal lives.

To efficiently do what is not required is NOT effective.

I consider Overall Equipment Effectiveness (OEE) to be a key metric in manufacturing.  While it is possible to consider the three factors Availability, Performance, and Quality separately, in the context of this discussion, we can see that any impediment to throughput can be directly correlated to lost time.
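To make the connection between OEE and lost time concrete, here is a minimal sketch of the standard three-factor calculation in Python (the shift figures are purely illustrative): downtime, slow cycles, and defective parts all surface as lost time.

    # Minimal OEE sketch - illustrative numbers only
    def oee(planned_time, downtime, ideal_cycle_time, total_count, good_count):
        run_time = planned_time - downtime                            # downtime is lost time
        availability = run_time / planned_time
        performance = (ideal_cycle_time * total_count) / run_time     # slow cycles are lost time
        quality = good_count / total_count                            # scrap is time spent for nothing
        return availability * performance * quality

    # 480 minute shift, 60 minutes down, 0.5 min/part ideal, 700 parts made, 680 good
    print(f"OEE = {oee(480, 60, 0.5, 700, 680):.1%}")                 # roughly 71%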

To extend the concept in a more general sense, our objective is to provide our customers with a quality product or service in the shortest amount of time.  Waste is any impediment or roadblock that prevents us from achieving this objective.

Indirect Waste and Effectiveness

Indirect Waste (time) is best explained by way of example.  How many times have we heard, “I don’t understand this – we just finished training everybody!”  It is common for companies to provide training to teach new skills.  Similarly, when a problem occurs, one of the – too often used – corrective actions is “re-trained employee(s).”  Unfortunately, the results are not always what we expect.

Many companies seem content to use class test scores and instructor feedback to determine whether the training was effective, while little consideration is given to developing skill competency.  If an employee cannot execute or demonstrate the skill successfully or competently, how effective was the training?  Recognizing that a learning curve may exist, some companies are willing to overlook a lack of competence, but only for a limited time.

The company must discern between employee capability and the quality of training.  In other words, the company must ensure that the training provided will adequately prepare the employee to perform the required tasks successfully.  When it does not, either the training and / or the method of delivery is not effective, or the employee may simply lack the capability.  Let me qualify this last statement by saying that “playing the piano is not for everyone.”

Training effectiveness can only be measured by an employee’s demonstrated ability to apply their new knowledge or skill.

Time – Friend or Foe?

Lean tools are without doubt very useful and play a significant role in helping to carve out a lean strategy.  However, I am concerned that the tendency of many lean initiatives is to follow a prescribed strategy or formula.  This approach essentially creates a new box that in time will not be much different from the one we are trying to break out of.

An extension of this is the classification of wastes.  As identified here, the true waste is time.  Efforts to reduce or eliminate the time element from any process will undoubtedly result in cost savings.  However, the immediate focus of lean is not on cost reduction alone.

Global sourcing has ensured that “TIME” can be purchased at reduced rates from low-cost labour countries.  While this practice may result in a “cost savings”, it does nothing to promote the cause of lean – we have simply outsourced our inefficiencies at reduced prices.  Numerous Canadian and US facilities continue to be closed as workers witness the exodus of jobs to foreign countries with lower labour and operating costs; Electrolux’s closure of its facility in Webster City, Iowa, is just one example.

I don’t know the origins of multi-tasking, but the very mention of it suggests that someone had “time on their hands.”  So remember, when you’re put on hold, driving to work, stuck in traffic, stopped at a light, sorting parts, waiting in line, sitting in the doctor’s office, watching commercials, or just looking for lost or misplaced items – your time is running out.

Is time a friend or foe?  I suggest the answer is both, as long as we spend it wisely (spelled effectively).  Be effective, be Lean, and stop wasting time.

Let the race begin:  Ready … Set … Go …

Until Next Time – STAY lean!

Vergence Analytics

Twitter:  @Versalytics

Scorecards and Dashboards


I recently published Urgent -> The Cost of Things Gone Wrong, where I expressed concern about dashboards that attempt to do too much.  When they do, they become more of a distraction than a tool serving the intended purpose of helping you manage your business or processes.  To be fair, there are at least two levels of data management that are perhaps best differentiated by where and how they are used:  Scorecards and Dashboards.

I prefer to think of Dashboards as working with Dynamic Data: data that changes in real-time and influences our behaviors, much the same way the dashboard in a car communicates with us as we are driving.  The fuel gauge, odometer, two trip meters, tachometer, speedometer, digital fuel consumption (L/100 km), and km remaining are just a few examples of the instrumentation available to me in my Mazda 3.

While I appreciate the extra instrumentation, the two that matter first and foremost are the speedometer and the tachometer (since I have a 5 speed manual transmission).  The other bells and whistles do serve a purpose but they don’t necessarily cause me to change my driving behavior.  Of note here is that all of the gauges are dynamic – reporting data in real time – while I’m driving.

A Scorecard, on the other hand, is a periodic view of summary data and, from our example, may include Average Fuel Consumption, Average Speed, Maximum Speed, Average Trip, Maximum Trip, Total Miles Traveled, and so on.  The scorecard may also include driving record and vehicle performance data such as Parking Tickets, Speeding Tickets, Oil Changes, Flat Tires, and Emergency and Preventive Maintenance.

One of my twitter connections, Bob Champagne (@BobChampagne), published an article titled Dashboards Versus Scorecards- Its all about the decisions it facilitates…, which provides some great insights into Scorecards and Dashboards.  This article doesn’t require any further embellishment on my part, so I encourage you to read it at the following link:  http://wp.me/p1j0mz-6o.  I trust you will find the article both informative and engaging.

Next Steps:

Take some time to review your current metrics.  What metrics are truly influencing your behaviors and actions?  How are you using your metrics to manage your business?  Are you reacting to trends or setting them?

It’s been said that, “What gets measured gets managed.”  I would add – “to a point.”  It simply isn’t practical or even feasible to measure everything.  I say, “Measure to manage what matters most”.

Remember to get your free Excel Templates for OEE by visiting our downloads page or the orange widget in the sidebar.  You can follow us on twitter as well @Versalytics.

Until Next Time – STAY lean!

Vergence Analytics

Variance – OEE’s Silent Partner (Killer)

Image via Wikipedia: Example of two sample populations with the same mean but different standard deviations.

I was recently involved in a discussion regarding the value of Overall Equipment Effectiveness (OEE).  Of course, I fully supported OEE and confirmed that it can bring tremendous value to any organization that is prepared to embrace it as a key metric.  I also qualified my response by stating that OEE cannot be managed in isolation:

OEE and its intrinsic factors, Availability, Performance, and Quality, are summary level indices and do not measure or provide any indication of process stability or capability.

As a top level metric, OEE does not describe or provide a sense of actual run-time performance.  For example, when reviewing Availability, we have no sense of duration or frequency of down time events, only the net result.  In other words we can’t discern whether downtime was the result of a single event or the cumulative result of more frequent down time events over the course of the run.  Similarly, when reviewing Performance, we cannot accurately determine the actual cycle time or run rate, only the net result.

As shown in the accompanying graphic, two data sets (represented by Red and Blue) having the same average can present very different distributions, as depicted by the range of the data, the shape of the curves (their kurtosis and skewness), and significantly different standard deviations.

Clearly, any conclusions regarding the process simply based on averages would be very misleading.  In this same context, it is also clear that we must exercise caution when attempting to compare or analyse OEE results without first considering a statistical analysis or representation of the raw process data itself.
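A quick illustration in Python, using two made-up sets of hourly counts, shows how identical averages can conceal very different behaviour:

    import statistics

    # Two hypothetical runs of hourly throughput with the same average
    red = [100, 100, 101, 99, 100, 100]     # stable, predictable run
    blue = [130, 70, 120, 80, 110, 90]      # same average, highly unstable run

    for name, run in (("red", red), ("blue", blue)):
        print(name, statistics.mean(run), round(statistics.stdev(run), 1))
    # both average 100 parts per hour, yet the standard deviations differ by more
    # than an order of magnitude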

The Missing Metrics

Fortunately, we can use statistical tools to analyse run-time performance and determine whether our process is capable of delivering consistent throughput, just as Quality Assurance personnel use statistical analysis tools to determine whether a process is capable of producing conforming parts.

One of the greatest opportunities for improving OEE is to use statistical tools to identify opportunities to reduce throughput variance during the production run.

Run-Time or throughput variance is OEE’s silent partner as it is an often overlooked aspect of production data analysis.  Striving to achieve consistent part to part cycle times and consistent hour to hour throughput rates is the most fundamental strategy to successfully improve OEE.  You will note that increasing throughput requires a focus on the same factors as OEE: Availability, Performance, and Quality.  In essence, efforts to improve throughput will yield corresponding improvements in OEE.

Simple throughput variance can readily be measured using Planned versus Actual Quantities produced, either over fixed periods of time (preferred) or cumulatively.  Some of the benefits of using quantity based measurement are as follows:

  1. Everyone on the shop floor understands quantity or units produced,
  2. This information is usually readily available at the work station,
  3. Everyone can understand or appreciate its value in tangible terms,
  4. Quantity measurements are less prone to error, and
  5. Quantities can be verified (Inventory) after the fact.

For the sake of simplicity, consider measuring hourly process throughput and calculating the average, range, and standard deviation of this hourly data.  With reference to the graphic above, even this fundamental data can provide a much more comprehensive and improved perspective of process stability or capability than would otherwise be afforded by a simple OEE index.
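As a minimal sketch of what this might look like in Python (the hourly counts are hypothetical), a few lines are enough to expose the hours that deserve attention:

    import statistics

    # Hypothetical good-part counts recorded at the end of each production hour
    hourly = [118, 94, 121, 60, 117, 119, 88, 120]

    average = statistics.mean(hourly)
    data_range = max(hourly) - min(hourly)
    std_dev = statistics.stdev(hourly)

    print(f"average: {average:.1f} per hour  range: {data_range}  std dev: {std_dev:.1f}")
    # the 60 and 88 part hours are the first places to look for downtime or slow cycles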

Using this data, our objective is to identify those times where the greatest throughput changes occurred and to determine what improvements or changes can be implemented to achieve consistent throughput.  We can then focus our efforts on improvements to achieve a more predictable and stable process, in turn improving our capability.

In OEE terms, we are focusing our efforts to eliminate or reduce variation in throughput by improving:

  1. Availability by eliminating or minimizing equipment downtime,
  2. Performance through consistent cycle to cycle task execution, and
  3. Quality by eliminating the potential for defects at the source.

Measuring Capability

To make sure we’re on the same page, let’s take a look at the basic formulas that may be used to calculate Process Capability.  In the automotive industry, suppliers may be required to demonstrate process capability for certain customer designated product characteristics or features.  When analyzing this data, two sets of capability formulas are commonly used:

  1. Preliminary (Pp) or Long Term (Cp) Capability:  Determines whether the product can be produced within the required tolerance range,
    • Pp or Cp = (Upper Specification Limit – Lower Specification Limit) / (6 x Standard Deviation)
  2. Preliminary (Ppk) or Long Term (Cpk) Capability:  Determines whether product can be produced at the target dimension and within the required tolerance range:
    • Capability = Minimum of Either:
      • Capability Upper = (Upper Specification Limit – Average) / (3 x Standard Deviation)
      • Capability Lower = (Average – Lower Specification Limit) / (3 x Standard Deviation)

When Pp = Ppk or Cp = Cpk, we can conclude that the process is centered on the target or nominal dimension.  Typically, the minimum acceptable Capability Index (Cpk) is 1.67 and implies that the process is capable of producing parts that conform to customer requirements.
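A short worked example in Python may help; the tolerance and process figures are hypothetical:

    # Worked capability example - hypothetical tolerance and process data
    def cp_cpk(average, std_dev, lsl, usl):
        cp = (usl - lsl) / (6 * std_dev)                    # potential capability (spread only)
        cpk = min((usl - average) / (3 * std_dev),          # capability against the upper limit
                  (average - lsl) / (3 * std_dev))          # capability against the lower limit
        return cp, cpk

    # dimension specified as 10.0 +/- 0.5, process averaging 10.1 with a standard deviation of 0.05
    cp, cpk = cp_cpk(10.1, 0.05, 9.5, 10.5)
    print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")                # Cp = 3.33, Cpk = 2.67

Because Cp and Cpk differ in this hypothetical case, the process is capable but is not centered on the nominal dimension.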

In our case we are measuring quantities or throughput data, not physical part dimensions, so we can calculate the standard deviation of the collected data to determine our own “natural” limits (6 x Standard Deviation). Regardless of how we choose to present the data, our primary concern is to improve or reduce the standard deviation over time and from run to run.

Once we have a statistical model of our process, control charts can be created that in turn are used to monitor future production runs.  This provides the shop floor with a visual base line using historical data (average / limits) on which improvement targets can be made and measured in real-time.
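As a simplified sketch in Python, using a hypothetical baseline and plain mean plus or minus three standard deviations in place of the formal control chart constants, the base line and limits can be derived directly from previous runs:

    import statistics

    # Hypothetical hourly counts from previous, stable runs form the baseline
    baseline = [112, 118, 115, 120, 117, 114, 119, 116]
    centre = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    ucl, lcl = centre + 3 * sigma, centre - 3 * sigma
    print(f"centre: {centre:.1f}  UCL: {ucl:.1f}  LCL: {lcl:.1f}")

    # during the next run, flag any hour outside the limits so the event can be noted
    for hour, count in enumerate([117, 113, 96, 118], start=1):
        if not lcl <= count <= ucl:
            print(f"hour {hour}: {count} parts - outside the control limits, record the cause")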

Run-Time Variance Review

I recall using this strategy to achieve monumental gains:  a three shift operation with considerable instability became an extremely capable and stable two shift production operation, coupled with a one shift preventive maintenance / change over team.  Month over month improvements were evidenced by significantly improved capability data (substantially reduced Standard Deviation) and marked increases in OEE.

Process run-time charts with statistical controls were implemented for quantities produced just as the Quality department maintains SPC charts on the floor for product data.  The shop floor personnel understood the relationship between quantity of good parts produced and how this would ultimately affect the department OEE as well.

Monitoring quantities of good parts produced over shorter fixed time intervals is more effective than a cumulative counter that tracks performance over the course of the shift.  In this specific case, the quantity was “reset” for each hour of production essentially creating hourly in lieu of shift targets or goals.

Recording / plotting production quantities at fixed time intervals combined with notes to document specific process events creates a running production story board that can be used to identify patterns and other process anomalies that would otherwise be obscured.

Conclusion

I am hopeful that this post has heightened your awareness of the data represented by our chosen metrics.  In the boardroom, metrics are often viewed as absolute values, presented with a definitive sense of sterility.

Run-Time Variance also introduces a new perspective when attempting to compare OEE between shifts, departments, and factories.  From the context of this post, having OEE indices of the same value does not imply equality.  As we can see, metrics are not pure and perhaps even less so when managed in isolation.

Variance is indeed OEE’s Silent Partner but left unattended, Variance is also OEE’s Silent Killer.

Until Next Time – STAY lean!

Vergence Analytics

Twitter:  @Versalytics

Achieve Sustainability Through Integration


It’s no secret that lean is much more than a set of tools and best practices designed to eliminate waste and reduce variance in our operations.  I contend that lean is defined by a culture that embraces the principles on which lean is founded.  An engaged lean culture is evidenced by the continuing development and integration of improved systems, methods, technologies, best practices, and better practices.  When the principles of lean are clearly understood, the strategy and creative solutions that are deployed become a signature trait of the company itself.

Unfortunately, to offset the effects of the recession, many lean initiatives have either diminished or disappeared as companies downsized and restructured to reduce costs.  People who once entered data, prepared reports, or updated charts could no longer be supported and their positions were eliminated.  Eventually, other initiatives also lost momentum as further staffing cuts were made.  In my opinion, companies that adopted this approach simply attempted to implement lean by surrounding existing systems with lean tools.

Some companies have simply returned to a “back to basics” strategy that embraces the most fundamental principles of lean.  Is it enough to be driven by a mission, a few metrics, and simple policy statements or slogans such as “Zero Downtime”, “Zero Defects”, and “Eliminate Waste?”  How do we measure our ability to safely produce a quality part at rate, delivered on time and in full, at the lowest possible cost?  Regardless of what we measure internally, our stakeholders are only concerned with two simple metrics – Profit and Return on Investment.  The cold hard fact is that banks and investors really don’t care what tools you use to get the job done.  From their perspective the best thing you can do is make them money!  I agree that we are in business to make money.

What does it mean to be lean?  I ask this question on the premise that, in many cases, sustainability appears to be dependent on the resources that are available to support lean versus those who are actually running the process itself.  As such, “sustainability” is becoming a much greater concern today than perhaps most of us are likely willing to admit.  I have always encouraged companies to implement systems where events, data, and key metrics are managed in real-time at the source such that the data, events, and metrics form an integral part of the whole process.

Processing data for weekly or monthly reports may be necessary; however, such reports are only meaningful if they are an extension of ongoing efforts at the shop floor / process level itself.  To do otherwise is simply pretending to be lean.  It is imperative that the data being recorded, the metrics being measured, and the corrective actions taken are meaningful, effective, and influence our actions and behaviors.

To illustrate the difference between Culture and Tools, consider this final thought:  a carpenter is still a carpenter with or without a hammer and nails.

Until Next Time – STAY lean!

Vergence Analytics

Twitter:  @Versalytics

Lean Paralysis

Lean – Breaking Through Paralysis

Significant initiatives, including lean, can reach a level of stagnation that eventually causes the project to either lose focus or disappear altogether.  Hundreds of books have already been written that reinforce the concept that the company culture will ultimately determine the success or failure of any initiative.  A sustainable culture of innovation, entrepreneurial spirit, and continual improvement requires effective leadership to cultivate and develop an environment that supports these attributes.

When launching any new initiative, we tend to focus on the many positive aspects that will result.  Failure is seldom placed on the list of possible outcomes for a new initiative.  We are all quite familiar with the typical Pro’s and Con’s, advantages versus disadvantages, and other comparative analysis techniques such as SWOT (Strengths, Weaknesses, Opportunities, Threats).

A well defined initiative should address both the benefits of implementation AND the risks to the operation if it is NOT implemented.

Back on Track

The Vision statement is one starting point to re-energize the team.  Of course, this assumes that the team actually understands and truly embraces the vision.

Overcoming Road Blocks

The Charter: Challenge the team to create and sign up to a charter that clearly defines the scope and expectations of the project.  The team should have clearly defined goals followed by an effective implementation / integration plan.  The charter should not only describe the “Achievements” but also the consequences of failure.  Be clear with the expectations:  Annual Savings of $xxx,xxx by Eliminating “Task A – B – C”, Reducing Inventory by “xx” days, and Reducing Lead Times by “xx” days.

Defining Consequences:  Competitive pricing will be compromised, leading to a loss of business.  This could be rephrased using the model expression:  We must do “THIS” or else “THIS”.  It has been said that the pain of change must be less than the pain of remaining the same.  If not, the program will surely fail.

The Plan: An effective implementation strategy requires a time line that includes reporting gates, key milestones, and the actual events or activities required.  The time line should be such that momentum is sustained.  If progress suggests that the program is ahead of schedule, revise timings for subsequent events where possible.  Extended “voids” or lags in event timing can reduce momentum and cause the team to disengage.

Focus: Often, we are presented with multiple options to achieve the desired results.  An effective decision making process is required to reduce the choices or to create a hybrid solution that encompasses several options.  The decision process must result in a single final solution.

Consequences: As mentioned earlier, a list of consequences should become part of the Charter process as well.  Failure suggests that a desired expectation will not be realized.  It is not enough to simply return to “the way it was”.  The indirect implication is that every failure becomes a learning experience for the next attempt.  In other words, we learn from our failures and stay committed to the course of the charter.

Example:

Almost every software program is challenged to sort data.  We don’t really think about the “method” that is used; we just wait for the program to do its task and for the results to appear.  At some point, the software development team must have chosen a certain method, also known as an algorithm, to sort the data.

We were recently challenged in a similar situation to decide which sort method would be best suited for the application.  You may be surprised to learn that there are many different sorting algorithms available such as:

  1. Bubble Sort
  2. Quick Sort
  3. Heap Sort
  4. Comb Sort
  5. Insertion Sort
  6. Merge Sort
  7. Shaker Sort
  8. Flash Sort
  9. Postman Sort
  10. Radix Sort
  11. Shell Sort

This is certainly quite a selection, and more methods are certain to exist.  Each method has its advantages and disadvantages.  Some sorting methods require more computer memory; some are stable, others are not.  Our goal was to create a sorted list without duplicates.  We considered adding elements and maintaining a sorted “duplicate free” list in real-time.  We also considered reading all the data first and sorting it after the fact.

The point is that, of the many available options, one solution will eventually be adopted by the team.  Using the “wrong” sorting method could result in extremely slow performance and frustrated users, and users may abandon a solution that they themselves were not a part of creating.  While a bubble sort may produce the intended result, it is usually not the most efficient.

Another aspect of effective development is to document the analysis process that was used to arrive at the final solution.  In our example, we could run comparative timing tests and measure computer resource requirements to determine which solution is most suitable for the application.  Some algorithms work better on “nearly sorted” lists versus others that work better with “randomly ordered” data.
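As a simple illustration of such a comparison, here is a small, hypothetical timing harness in Python that pits a bubble sort against the built-in sort combined with de-duplication; the data size and the two methods chosen are illustrative only:

    import random
    import timeit

    data = [random.randrange(10_000) for _ in range(2_000)]    # hypothetical unsorted input

    def bubble_sort(values):
        values = list(values)              # O(n^2) - fine for tiny lists, painful at scale
        for i in range(len(values)):
            for j in range(len(values) - 1 - i):
                if values[j] > values[j + 1]:
                    values[j], values[j + 1] = values[j + 1], values[j]
        return values

    def sorted_unique(values):
        return sorted(set(values))         # read everything first, then de-duplicate and sort

    for name, method in (("bubble sort", bubble_sort), ("sorted(set())", sorted_unique)):
        seconds = timeit.timeit(lambda: method(data), number=3)
        print(f"{name:>15}: {seconds:.3f} s for 3 runs")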

Engage the Team: The team should be represented by multiple disciplines or departments within the organization.  Using the simple example from above, the development team may create a working solution that is later abandoned by the ultimate users of the system due to its poor performance.  The charter should be very clear on the desired expectations and performance criteria of the final solution.

Creating a model or prototype to represent the solution is commonplace.  This minimizes the time and resources expended before arriving at the final solution for implementation.

Vision: Leadership must continue to focus beyond the current steps.  A project or program is not an end in itself; rather, it should be viewed as the foundation for the next step of the journey.  Lean, like any other initiative, is an evolutionary process.  Lean is not defined by a series of prescriptions and formulas.  The pursuit and elimination of waste is a mission that can be achieved in many different ways.

Management / Review

Regular management reviews should be part of the overall strategy to monitor progress and more so to determine whether there are any impediments to a successful outcome.  The role of leadership is to provide direction to eliminate or resolve the road blocks and to keep the team on track.

Breaking Through Paralysis

The objective is clear – we need to keep the initiative moving and also learn to identify when and why the initiative may have stopped.  Running a business is more than just having good intentions.  We must be prudent in our execution to efficiently and effectively achieve the desired results.

Until Next Time – STAY Lean!

 

Business Novels: The Next Best Thing To Reality

I enjoy reading business novels and fondly remember my first read of “The Goal” by Eliyahu Goldratt.  I also had the pleasure of reading another book titled “Velocity:  Combining Lean, Six Sigma, and the Theory of Constraints to Achieve Breakthrough Performance – A Business Novel” by Dee Jacob, Suzan Bergland, and Jeff Cox.  This book is a natural extension of “The Goal” and offers a very realistic perspective of what it takes to turn a company around.

Rife with the typical political rhetoric that accompanies any change process, the story is truly intriguing, showing how to overcome these challenges and what it can mean to set aside personal agendas and theories for the greater good of the company.  Velocity also demonstrates how prescriptive strategies can become an impediment to finding new solutions to solve the problem at hand.

Business novels provide a unique self-paced learning opportunity by teaching new concepts that otherwise may be difficult to explain or appreciate in a formal classroom setting.  The story line helps to deepen our understanding and expectations of the concepts all the while improving  our ability to retain the information.

Velocity is a great read and, like The Goal, should be mandatory reading for everyone involved in manufacturing.

Enjoy and thank you for your continued support!

Until Next Time – STAY Lean!

Vergence Analytics

Urgent -> The Cost of Things Gone Wrong!


Most of you will know that this is the first time I have published two posts on the same day.  I was compelled to write this post after receiving yet another “dashboard suite” offer.

To be clear, I fully support and recommend the use of dashboards.  They are excellent tools to help us manage our business.  I am reminded however that we need to keep our metrics in perspective.

As their popularity continues to increase, so will the number of features, tools, and metrics that they support.  As I mentioned earlier, I just received a promotion for yet another dashboard software that is capable of measuring an overwhelming number of metrics.

As I reviewed the long list of metrics, a common theme emerged from the words used to describe them:

  • Missed
  • Costs
  • Shortage
  • Stock Out
  • Defective
  • Nonconforming
  • Errors
  • Inspection
  • Mistake
  • Sort
  • Recall
  • Warranty
  • Returns
  • Re-Engineer
  • Rework
  • Repair
  • Retest
  • Scrap
  • Wrong
  • Waste
  • Downtime
  • Incidents

The common theme?  As I continued to review the “benefits” of measuring these metrics, I could only wonder why so many things were going wrong, and why the one super metric that captures all of them was missing:  “The Cost of Things Gone Wrong”.

We should also be cognizant of other hidden costs such as data collection, system maintenance, and meetings.  I am always concerned when metrics are not supported by actionable improvements or do not demonstrate sustainable, positive change over time.

As a Technologist, I am humbly reminded that the design of any system, process, product, or service precedes any measurement of its performance.  Perhaps the long list of metrics is also indicative of our inability to assess potential failures and improve our designs prior to release.

Another observation was the absence of Key Performance Predictors.  Using the term “speculation” to predict future performance is a good sign that Key Performance Predictors are not adequately understood, defined, or used in your organization.  This is a topic for another post.

Although dashboards purport to help us manage our business, I suggest that we take the time to reflect and understand what it really means to “do it right the first time.”  Barring that, let’s get it right the next time.

Until Next Time – STAY Lean!

Vergence Analytics

OEE and Human Effort


I was recently asked to consider a modification to the OEE formula to calculate labour versus equipment effectiveness.  This request stemmed from the observation that some processes, like assembly or packing operations, may be completely dependent on human effort.  In other words, the people performing the work ARE the machine.

I have observed situations where an extra person was stationed at a process to assist with loading and packing of parts so the primary operator could focus on assembly alone.  In contrast, I have also observed processes running with fewer operators than required by the standard due to absenteeism.

In other situations, personnel have been assigned to perform additional rework or sorting operations to keep the primary process running.  It is also common for someone to be assigned to a machine temporarily while another machine is down for repairs.  In these instances, the ideal number of operators required to run the process may not always be available.

Although the OEE Performance factor may reflect the changes in throughput, the OEE formula does not offer the ability to discern the effect of labour.  It may be easy to recognize where people have been added to an operation because performance exceeds 100%.  But what happens when fewer people have been assigned to an operation or when processes have been altered to accommodate additional tasks that are not reflected in the standard?

Based on our discussion above, it seems reasonable to consider a formula that is based on Labour Effort.  The number of direct labour employees assigned to a process should be among the factors that help us identify where variances to standard exist.  At a minimum, a new cycle time should be established based on the number of people present.
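One simple way to do this, and it is only an illustrative assumption of linear scaling rather than an established OEE convention, is to pro-rate the ideal cycle time by the crew actually present.  A minimal sketch in Python with hypothetical numbers:

    # Labour-adjusted Performance sketch - assumes output scales linearly with crew size
    def adjusted_cycle_time(standard_cycle_time, standard_crew, actual_crew):
        return standard_cycle_time * standard_crew / actual_crew

    def performance(ideal_cycle_time, total_count, run_time):
        return ideal_cycle_time * total_count / run_time

    # assembly cell rated at 0.75 min/part with a 3-person crew, run 420 minutes with only 2 people
    parts_made = 355
    print(f"vs. standard crew: {performance(0.75, parts_made, 420):.0%}")                             # ~63%
    print(f"vs. actual crew:   {performance(adjusted_cycle_time(0.75, 3, 2), parts_made, 420):.0%}")  # ~95%

The gap between the two figures is the labour effect that the unadjusted Performance factor hides.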

OEE versus Financial Measurement

Standard Cost Systems are driven by a defined method or process and rate for producing a given product. Variances in labour, material, and / or process will also become variances to the standard cost and reflected as such in the financial statements. For this reason, OEE data must reflect the “real” state of the process.

If labour is added (over standard) to an operation to increase throughput, the process has changed. Unless the standard is revised, OEE results will be reported higher, while the costs associated with production may show only a minimal variance because they are based on the standard cost. We have now lost our ability to correlate OEE data with some of our key financial performance indicators.

Until Next Time – STAY lean!

Vergence Analytics

OEE, Labour, and Inventory

Almost every manufacturing facility has a method or means to measure labour efficiency.  Some of these methods may include Earned versus Actual hours or perhaps they are financially driven metrics such as “Labour as a Percent of Sales” or as “Labour Variance to Plan”.  As we have learned all too well through the latest economic downturn, organizations are quite adept at using these metrics to flex direct labour levels based on current demand.  This suggests that almost every company has access to at least a  financial model of some form that can be used to represent “ideal” work force requirements based on sales.

It is not our intent to discuss how these models are created; however, I can only trust that the financial model is based on a realistic assessment of current process capabilities and the resources required to support the product mix represented by the sales forecast.  At a minimum, the assessment should include the following standards and known variances for each process:  Material, Labour, and Rate.  You may recognize these standards as they form the basis of the OEE cost model that we have discussed in detail and offer on our Free downloads page.

Analyzing the Data

Many companies use both Labour Efficiency and Overall Equipment Effectiveness to measure the performance of their manufacturing operations.  We would also expect a strong correlation to exist between these two metrics as the basis for their measurement is fundamentally common.  As you may have already observed in your own operations, this is not always the case in the real world.  The disconnect between these two metrics is a strong indicator that yet another opportunity for improvement may exist.

For example, it is not uncommon to see operations where OEE is 60% – 70% while labour efficiencies are reported to be 95% or better.  How is this possible?  The simple answer is that labour is redirected to perform other work while a machine is down or, in extreme cases, the work force is sent home.  In both cases, OEE continues to suffer while labour is managed to minimize the immediate financial impact of the downtime.
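To illustrate with a rough, hypothetical shift (the figures are mine, not drawn from any specific plant):

    # Hypothetical 8-hour shift: 150 minutes of downtime, crew redirected to a backup job
    planned_minutes = 480
    downtime_minutes = 150
    earned_minutes = 330 + 140           # credit earned on the primary job plus the backup job
    paid_minutes = 480

    availability = (planned_minutes - downtime_minutes) / planned_minutes    # drags OEE down
    labour_efficiency = earned_minutes / paid_minutes                        # still looks healthy

    print(f"availability: {availability:.0%}   labour efficiency: {labour_efficiency:.0%}")
    # roughly 69% availability versus roughly 98% labour efficiency for the same shift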

Set up and / or change over may be one of the reasons for down time and another reason why there is a perceived discrepancy between labour efficiency and overall equipment effectiveness.  Some companies employ personnel specifically trained to perform these tasks, who are classified as indirect labour.

Redirecting labour to operate other machines presents its own unique set of problems and is typically frowned upon in lean organizations.  Companies that follow this practice must ensure that adequate controls are in place to prevent excess inventories from building over time.  I reluctantly concede to the practice of “redeployment during downtime” if it is indeed being managed.

Some would argue that the alternate work is being managed because the schedule actually includes a backup job if a given machine goes down.  If we probe deep enough, we may be surprised to learn that some of these backup jobs are actually “never” scheduled because the primary scheduled machines “always” provide ample downtime to finish orders of “unscheduled” backup work.  As such, we must be fully aware of the potential to create the “hidden factory” that runs when the real one isn’t.

Pitfalls of Redirected Labour

This practice easily becomes a learned behavior and tends to place more emphasis on preserving labour efficiency than actually increasing the sense of urgency required to solve the real problem.  In all too many cases the real problem is never solved.

Too many opportunities to improve operations are missed because many planners have learned to compensate for processes that continually fail to perform.  Experience shows that production schedules evolve over time to include backup jobs and alternate machines that ultimately serve as a mask to keep real problems from surfacing.  From a labour and OEE perspective, everything appears to be normal.

Redirecting labour to compensate for process deficiencies may give rise to excess inventory.  Increased inventory is an extremely high price to pay for the sake of perceived short-term efficiency, and it has an immediate negative impact on cash flow as real money is now tied up in parts until they are consumed or sold.  Additional penalties of inventory include carrying and handling costs that are also worthy of consideration.

Three Metrics – Working Together

You will note that we deliberately used the term labour efficiency throughout our discussion; this presents an opportunity to demonstrate that efficiency and effectiveness are not synonymous.  Efficiency measures our ability to produce parts at rate, while effectiveness measures our ability to produce the right quantity of quality parts at the right time.

Overall Equipment Effectiveness, Labour Efficiency, and Inventory are truly complementary metrics that can be used to determine how effectively we are managing our resources:  Human, Equipment, Material, and Time.  Our mission is to safely produce a quality part at rate, delivered on time and in full, at the lowest possible cost.  Analyzing the data derived through our metrics is the key to understanding where opportunities persist.  Once identified, we can effectively solve the problems and implement corrective actions accordingly.

Until Next Time – STAY lean!

Vergence Analytics

Discover Toyota’s Best Practice

The new headquarters of the Toyota Motor Corporation, opened in February 2005 in Toyota City. (Photo credit: Wikipedia)

I have always been impressed by Toyota’s inherent ability to adapt, improve, and embrace change even during the harshest times.  This innate ability is a signature trait of Toyota’s culture and has been the topic of intense study and research for many years.

How is it that Toyota continues to thrive regardless of the circumstances they encounter?  While numerous authors and lean practitioners have studied Toyota’s systems and shared best practices, all too many have missed the underlying strategy behind Toyota’s ever evolving systems and processes.  As a result, we are usually provided with ready to use solutions, countermeasures, prescriptive procedures, and forms that are quickly adopted and added to our set of lean tools.

The true discovery occurs when we realize that these forms and procedures are the product or outcome of an underlying systemic thought process.  This is where the true learning and process transformations take place.  In many respects this is similar to an artist who produces a painting.  While we can enjoy the product of the artist’s talent, we can only wonder how the original painting appears in the artist’s mind.

Of the many books that have been published about Toyota, there is one book that has finally managed to capture and succinctly convey the strategy responsible for the culture that presently defines Toyota.  Written by Mike Rother, “Toyota Kata – Managing People For Improvement, Adaptiveness, and Superior Results” reveals the methodology used to develop people at all levels of the Toyota organization.

Surprisingly, the specific techniques described in the book are not new; however, the manner in which they are used does not necessarily follow conventional wisdom or industry practice.  Throughout the book, it becomes clear that the current practices at Toyota are the product of a collection of improvements, each building on the results of previous steps taken toward a seemingly elusive target.

Although we have gleaned and adopted many of Toyota’s best practices into our own operations, we do not have the benefit of the lessons learned nor do we fully understand the circumstances that led to the creation of these practices as we know them today.  As such, we are only exposed to one step of possibly many more to follow that may yield yet another radical and significantly different solution.

In simpler terms, the solutions we observe in Toyota today are only a glimpse of the current level of learning.  In the spirit of the improvement kata, it stands to reason that everything is subject to change.  The one constant throughout the entire process is the improvement kata or routine that is continually practiced to yield even greater improvements and results.

If you or your company are looking for a practical, hands on, proven strategy to sustain and improve your current operations then this book, “Toyota Kata – Managing People For Improvement, Adaptiveness, and Superior Results“, is the one for you.  The improvement kata is only part of the equation.  The coaching kata is also discussed at length and reveals Toyota’s implementation and training methods to assure the whole company mindset is engaged with the process.

Why are we just learning of this practice now?  The answer is quite simple.  The method itself is practiced by every Toyota employee at such a frequency that it has become second nature to them and trained into the culture itself.  While the tools that are used to support the practice are known and widely used in industry, the system responsible for creating them has been obscured from view – until now.

You can preview the book by simply clicking on the links in our post.  Transforming the culture in your company begins by adding this book, “Toyota Kata – Managing People For Improvement, Adaptiveness, and Superior Results”, to your lean library.  I have been practicing the improvement and coaching kata for some time and the results are impressive.  The ability to engage and sustain all employees in the company is supported by the simplicity of the kata model itself. For those who are more ambitious, you may be interested in the Toyota Kata Training offered by the University of Michigan.

Learning and practicing the Toyota improvement kata is a strategy for company leadership to embrace.  To do otherwise is simply waiting to copy the competition.  I have yet to see a company vision statement where the ultimate goal is to be second best.

Until Next Time – STAY lean!

Vergence Analytics