Category: Problem Solving

PI: Discovery to Application – 1900 years


Mathematicians celebrate PI Day on March 14 in honour of the number 3.14.  PI Day was founded by physicist Larry Shaw at the San Francisco Exploratorium on March 14, 1988.

Greek mathematician Archimedes of Syracuse (287 to 212 B.C.) was the first to calculate an accurate value of PI; however, the symbol did not come into widespread use until it was adopted by Swiss mathematician Leonhard Euler in 1737.

Concept to Customer

Could you imagine the conversation in today’s terms?  “So, whose idea is this PI thing anyway?”  “Well, there was this guy who lived just before the time of Christ and he …”

I find it interesting that 1,900 years had passed before PI became part of mainstream math.  Today, “concept-to-customer” cycles are measured in much shorter terms: days, weeks, months, and at most a few years.

Now the never-ending value of PI appears as a simple button on every calculator and as one of many functions built into virtually every piece of software and every operating system.

For those who are not aware, PI is used in various math, science, and engineering calculations and is most widely used to calculate the areas and volumes of circular geometric shapes.
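
As a quick illustration, the familiar circle and sphere formulas all rely on PI; here is a minimal Python sketch using the standard math module:

```python
import math

def circle_area(radius):
    """Area of a circle: A = pi * r^2"""
    return math.pi * radius ** 2

def cylinder_volume(radius, height):
    """Volume of a cylinder: V = pi * r^2 * h"""
    return math.pi * radius ** 2 * height

def sphere_volume(radius):
    """Volume of a sphere: V = (4/3) * pi * r^3"""
    return (4.0 / 3.0) * math.pi * radius ** 3

print(circle_area(1.0))          # ~3.14159
print(cylinder_volume(1.0, 2.0)) # ~6.28319
print(sphere_volume(1.0))        # ~4.18879
```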

Lessons Learned?

The founders of science, math, and physics fought and argued even under the threat of death to share what we take for granted today.  I am also amazed that their seemingly limitless imagination was not bound by the extremely limited resources available to them at the time.

They dared to dream and lived to tell about it.

When we look back through history, we see numerous sketches, concepts, and ideas that simply were not feasible at the time and, if not recorded or documented, would otherwise have been lost.  Record / document your ideas no matter how far-fetched some of them may be – a valuable lesson learned, but seldom applied.

A knowledge database can be used to store ideas and concepts for future projects where implementation may be more practical or feasible.  Many times I hear teams discussing options they once considered, yet documentation exists only for the one they selected.

I’m sure Archimedes had his reasons for calculating PI, but I have to laugh just a little as I wonder what he could have been thinking.  “I’m not sure what this means exactly but I’m sure someone will find a use for it someday.  We’ll just put that over here for now.”  And, 1,900 years later, Euler saying, “You’ll never believe what I found today.”

Remembering our roots.

Perhaps we don’t always give credit where credit is due and often overlook the true origins of our final solution.  On that note, Albert Einstein was also born on March 14, 1879 and would be celebrating his 132nd birthday as of this writing.  Oddly, his date of birth is loosely related to PI, 3.14.

I can only imagine what today’s technology would’ve done to accelerate progress during that time of discovery.  As I pondered that thought in the context of PI, I realized that they were already way ahead of their time and society just couldn’t or refused to keep up.

Time for some real PIe.

Until Next Time – STAY lean!

Vergence Analytics
Twitter:  @Versalytics

Brilliant Printing Technology


Research and development take problem solving to a whole new level where solutions have yet to be discovered and are often only imagined.  I am impressed by the relentless efforts of research teams that continue to develop and give rise to life-saving, innovative technologies.

As our population ages and “baby boomers” enter their retirement years, the lack of organ donations is quickly becoming a major concern.  Just when I thought the ability to print inanimate 3D objects was impressive enough, I was absolutely amazed by this TED talk that not only discusses, but demonstrates, the ability to engineer and print life-saving human tissue and organ structures.

If the video is not available above, click here to view “Anthony Atala: Printing a Human Kidney”.

In this video, the problem and its solution have spanned decades and continue to be resolved over time.  The persistence of the teams that pursue these solutions is to be admired.  I wonder how often we may have given up too soon – not knowing how close we were to finding that perfect solution.

Today, I’m thankful to those who continue their never-ending attempts to make the impossible possible and continually improve the quality of life for all humanity.

TED.com presents talks on a wide variety of topics, including music, oceans, astronomy, space exploration, technology, medicine, and much more.  I highly recommend subscribing and trust you will be as impressed as I have been over the years that I have been a member.

Until Next Time – STAY lean!

Vergence Analytics
Twitter:  @Versalytics

What did you expect? Benchmarking and Decisions – for better or worse.



I recognize that benchmarking is not a new concept.  In business, we have learned to appreciate the value of benchmarking at the “macro level” through our deliberate attempts to establish a relative measure of performance, improvement, and even for competitor analysis.  Advertisers often use benchmarking as an integral component of their marketing strategy.

The discussion that follows will focus on the significance of benchmarking at the “micro level” – the application of benchmarking in our everyday decision processes.  In this context, “micro benchmarking” is a skill that we all possess and often take for granted – it is second nature to us.  I would even go so far as to suggest that some decisions are autonomous.

With this in mind, I intend to take a slightly different, although general, approach to introduce the concept of “micro benchmarking”.  I also contend that “micro benchmarking” can be used to introduce a new level of accountability to your organization.

Human Resources – The Art of Deception
Interviews and Border Crossing

Micro benchmarking can literally occur “in the moment.”  The interview process is one example where “micro benchmarking” frequently occurs.  I recently read an article titled, “Reading people: Signs border guards look for to spot deception“, and made particular note of the following advice to border crossing agents (emphasis added):

Find out about the person and establish their base-line behavior by asking about their commute in, their travel interests, etc. Note their body language during this stage as it is their norm against which all ensuing body language will be compared.

The interview process, whether for a job or crossing the border, represents one example where major (even life changing) decisions are made on the basis of very limited information.  As suggested in the article, one of the criteria is “relative change in behavior” from the norm established at the first greeting.  Although the person conducting a job interview may have more than just “body language” to work with, one of the objectives of the interview is to discern the truth – facts from fiction.

Obviously, the decision to permit entry into the country, or to hire someone, may have dire consequences, not only for the applicant, but also for you, your company, and even the country.  Our ability to benchmark at the micro level may be one of the more significant discriminating factors whereby our decisions are formulated.

Decisions – For Better or Worse:

Every decision we make in our lives is accompanied by some form of benchmarking.  While this statement may seem to be an over-generalization, let’s consider how decisions are actually made.  It is a common practice to “weigh our options” before making the final decision.  I suggest that every decision we make is rooted in some form of benchmarking exercise.  The decision process itself considers available inputs and potential outcomes (consequences):

  1. Better – Worse
  2. Pros – Cons
  3. Advantages – Disadvantages
  4. Life – Death
  5. Success – Failure
  6. Safe – Risk

Decisions are usually intended to yield the best of all possible outcomes and, as suggested by the very short list above, they are based on “relative advantage” or “consequential” thinking processes.  At the heart of each of these decisions is a base line reference or “benchmark” whereby a good or presumably “correct” decision can be made.

We have been conditioned to believe (religion / teachings) and think (parents / education / social media / music) certain thoughts.  These “belief systems” or perceived “truths” serve as filters, in essence forming the base line or “benchmark” by which our thoughts, and hence our decisions, are processed.  Every word we read or hear is filtered against these “micro level” benchmarks.

I recognize that many other influences and factors exist but, suffice it to say, they are still based on a relative benchmark.  Unpopular decisions are just one example where social influences are heavily considered and weighed.  How many times have we heard, “The best decisions are not always popular ones.”  Politicians are known to make the tough and not so popular decisions early on in their term and rely on a waning public memory as the next election approaches – time heals all wounds but the scars remain.

Decisions – Measuring Outcomes

As alluded to in the last paragraph, our decision process may be biased as we consider the potential “reactions” or responses that may result.  Politics is rife with “poll” data that somehow sway the decisions that are made.  In a similar manner, substantially fewer issues of value are resolved in an election year for fear of a negative voter response.

In essence, there are two primary outcomes to every decision: Reactions and Results.  Both may be classified as summarized below.

  1. Reactions – Noise (Social Aspects)
    • supporters
    • detractors
    • resistors
  2. Results – performance, data, facts (Business Aspects)
    • worse than expected (negative)
    • as expected (neutral)
    • better than expected (positive)

Accountability

If you are still with me, I suggest that at least two levels of accountability exist:

  1. The process used to arrive at the decision
  2. The results of the decision

In corporations, large and small, executives are often held to account for worse than expected (negative) performance, where results are the primary – and seemingly only – focus of discussion.  I contend that positive results that exceed expectations should be subject to the same, if not higher, level of scrutiny.

Better and worse than expected results are both indicative of a lack of understanding or full comprehension of the process or system and as such present an opportunity for greater learning.  Predicting outcomes or results is a fundamental requirement and best practice where accountability is an inherent characteristic of company culture.

Toyota is known for continually deferring to the most basic measurement model:  Planned versus Actual.  Although positive (better than expected) results are more readily accepted than negative (worse than expected) results, both impact the business:

  • Better than expected:
    • Other potential investments may have been deferred based on the planned return on investment.
    • Financial statements are understated, which affects other business aspects and transactions.
    • Decision model / process does not fully describe / consider all aspects to formulate planned / predictable results
      • Decision process to yield actual results cannot be duplicated unless lessons learned are pursued, understood, and the model is updated.
  • Worse than expected:
    • Poor / lower than expected return on investment
    • Extended financial obligations
    • Negative impact to cash flow / available cash
    • Lower stakeholder confidence for future investments
    • Decision model / process does not fully describe / consider all aspects to formulate planned / predictable results
      • Decision process will be duplicated unless lessons learned are pursued, understood, and the model is updated.

The second level of accountability and perhaps the most important concerns the process or decision model used to arrive at the decision.  In either case we want to discern between informed decisions, “educated guesses”, “wishful thinking”, or willful neglect.  We can see that individual and system / process level accountabilities exist.

The ultimate objective is to understand “what we were thinking” so we can repeat our successes without repeating our mistakes.  This seems to be a reasonable expectation and is a best practice for learning organizations.

Some companies are very quick to assign “blame” to individuals regardless of the reason for failure.  These situations can become very volatile and once again are best exemplified in the realm of politics.  There tends to be more leniency for individuals where policies or protocol has been followed.  If the system is broken, it is difficult to hold individuals to account.

The Accountability Solution – Show Your Work!

So, who is accountable?  Before you answer that, consider a person who used a decision model and the results were worse than the model predicted.  From a system point of view the person followed standard company protocol.  Now consider a person who did not use the model, knowing it was flawed, and the results were better than expected.  Both “failures” have their root in the same fundamental decision model.

The accountabilities introduced here, however, are somewhat different.  The person following protocol has a traceable failure path.  In the latter case, the person introduced a new “untraceable” method – unless, of course, the person noted and advised of the flawed model before the fact rather than after it.

Toyota is one of the few companies I have worked with where documentation and attention to detail are paramount.  As another example, standardized work is not intended to serve as a rigid set of instructions that can never be changed. To the contrary, changes are permissible, however, the current state is the benchmark by which future performance is measured and proven.  The documentation serves as a tangible record to account for any changes made, for better or worse.

Throughout high school and college, we were always encouraged to “show our work”.  Some courses offered partial marks for the method even if the final answer was wrong.  The opportunities for learning here, however, are greater than simply determining the student’s comprehension of the subject material.  Beyond that, it also offers an opportunity for the teacher to understand why the student failed to comprehend the subject matter and to determine whether the method used to teach the material could be improved.

Showing the work also demonstrates where the process breakdown occurred.  A wrong answer could have been due to a complete misunderstanding of the material or the result of a simple mis-entry on a calculator.  Understanding why and how we make our decisions is just as important as understanding our expectations.

In conclusion

While the latter situations may be more typical of a macro level benchmark, I suggest that similar checks and balances occur even at the micro level.  As mentioned in the premise, some decisions may even be autonomous (snap decisions).   Examples of these decisions are public statements that all too often require an apology after the fact.  The sentiments for doing so usually include, “I’m sorry, I didn’t know what I was thinking.”  I am always amazed to learn that we may even fail to keep ourselves informed of what we’re thinking sometimes.

Until Next Time – STAY lean!

Vergence Analytics

Twitter:  @Versalytics

Integrated Waste: Lather, Rinse, Repeat


Admittedly, it has been a while since I checked a shampoo bottle for directions, however, I do recall a time in my life reading:  Lather, Rinse, Repeat.  Curiously, they don’t say when or how many times the process needs to be repeated.

Perhaps someone can educate me as to why it is necessary to repeat the process at all – other than “daily”.  I also note that this is the only domestic “washing” process that requires repeating the exact same steps.  Hands, bodies, dishes, cars, laundry, floors, and even pets are typically washed only once per occasion.

The intent of this post is not to debate the effectiveness of shampoo or to determine whether this is just a marketing scheme to sell more product.  The point of the example is this:  simply following the process as defined is, in my opinion, inherently wasteful of product, water, and time – literally, money down the drain.

Some shampoo companies may have changed the final step in the process to “repeat as necessary”, but that still presents a degree of uncertainty and ensures that exceptions to the new standard process of “Lather, Rinse, and Repeat as Necessary” are likely to occur.

In the spirit of continuous improvement, new 2-in-1 and even 3-in-1 products are available on the market today that serve as the complete “shower solution” in one bottle.  As these are also my products of choice, I can advise that these products do not include directions for use.

Scratching the Surface

As lean practitioners, we need to position ourselves to think outside of the box and challenge the status quo.  This includes the manner in which processes and tasks are executed.  In other words, we not only need to assess what is happening, we also need to understand why and how.

One of the reasons I am concerned with process audits is that conformance to the prescribed systems, procedures, or “Standard Work” somehow suggests that operations are efficient and effective.  In my opinion, nothing could be further from the truth.

To compound matters, in cases where non-conformances are identified, the team is often too eager to fix (“patch”) the immediate process without considering the implications to the system as a whole.  I present an example of this in the next section.

The only hint of encouragement that satisfactory audits offer is this: “People will perform the tasks as directed by the standard work – whether it is correct or not.”  Of course this assumes that procedures were based on people performing the work as designed or intended as opposed to documenting existing habits and behaviors to assure conformance.

Examining current systems and procedures at the process level only serves to scratch the surface.  First hand process reviews are an absolute necessity to identify opportunities for improvement and must consider the system or process as a whole as you will see in the following example.

Manufacturing – Another Example

On one occasion, I was facilitating a preparatory “process walk” with the management team of a parts manufacturer.  As we visited each step of the process, we observed the team members while they worked and listened intently as they described what they do.

As we were nearing the end of the walk through, I noted that one of the last process steps was “Certification”, where parts are subject to 100% inspection and rework / repair as required.  After being certified, the parts were placed into a container marked “100% Certified” then sent to the warehouse – ready for shipping to the customer.

When I asked about the certification process, I was advised that:  “We’ve always had problems with these parts and, whenever the customer complained, we had to certify them all 100% … ‘technical debate and more process intensive discussions followed here’ … so we moved the inspection into the line to make sure everything was good before it went in the box.”

Sadly, when I asked how long they’ve been running like this, the answer was no different from the ones I’ve heard so many times before:  “Years”.  So, because of past customer problems and the failure to identify true root causes and implement permanent corrective actions to resolve the issues, this manufacturer decided to absorb the “waste” into the “normal” production process and make it an integral part of the “standard operating procedure.”

To be clear, just when you thought I picked an easy one, the real problem is not the certification process.  To the contrary, the real problem is in the “… ‘technical debate and more process intensive discussions followed here’ …” portion of the response.  Simply asking about the certification requirement was scratching the surface.  We need to …

Get Below the Surface

I have always said that the quality of a product is only as good as the process that makes it.  So, as expected, the process is usually where we find the real opportunities to improve.  From the manufacturing example above, we clearly had a bigger problem to contend with than simply “sorting and certifying” parts.  On a broader scale, the problems I personally faced were two-fold:

  1. The actual manufacturing processes with their inherent quality issues and,
  2. The Team’s seemingly firm stance that the processes couldn’t be improved.

After some discussion and more debate, we agreed to develop a process improvement strategy.  Working with the team, we created a detailed process flow and Value Stream Map of the current process.  We then developed a Value Stream Map of the Ideal State process.  Although we did identify other opportunities to improve, it is important to note that the ideal state did not include “certification”.

I worked with the team to facilitate a series of problem solving workshops where we identified and confirmed root causes, conducted experiments, performed statistical analyses, developed / verified solutions, implemented permanent corrective actions, completed detailed process reviews and conducted time studies.  Over the course of 6 months, progressive / incremental process improvements were made and ultimately the “certification” step was eliminated from the process.

We continued to review and improve other aspects of the process, supporting systems, and infrastructure as well including, but not limited to:  materials planning and logistics, purchasing, scheduling, inventory controls, part storage, preventive maintenance, redefined and refined process controls, all supported by documented work instructions as required.  We also evaluated key performance indicators.  Some were eliminated while new ones, such as Overall Equipment Effectiveness, were introduced.

Summary

Some of the tooling changes to achieve the planned / desired results were extensive.  One new tool was required while major and minor changes were required on others.  The real tangible cost savings were very significant and offset the investment / expense many times over.  In this case, we were fortunate that new jobs being launched at the plant could absorb the displaced labor resulting from the improvements made.

Every aspect of the process demonstrated improved performance and ultimately increased throughput.  The final proof of success was also reflected on the bottom line.  In time, other key performance indicators reflected major improvements as well, including quality (low single digit defective parts per million, significantly reduced scrap and rework), increased Overall Equipment Effectiveness (Availability, Performance, and Quality), increased inventory turns, improved delivery performance (100% on time – in full), reduced overtime,  and more importantly – improved morale.

Conclusion

I have managed many successful turnarounds in manufacturing over the course of my career and, although the problems we face are often unique, the challenge remains the same:  to continually improve throughput by eliminating non-value added waste.  Of course, none of this is possible without the support of senior management and full cooperation of the team.

While it is great to see plants that are clean and organized, be forewarned that looks can be deceiving.  What we perceive may be far from efficient or effective.  In the end, the proof of wisdom is in the result.

Until Next Time – STAY lean!

Vergence Analytics
Twitter:  @Versalytics

Critical Process Triggers

Critical Triggers

It is inevitable that failures will occur and it is only a matter of time before we are confronted with their effects.  Our concern regards our ability to anticipate and respond to failures when they occur.  How soon is too soon to respond to a change or shift in the process?  Do we shut down the process at the very instant a defect is discovered?  How do we know what conditions warrant an immediate response?

The quality of a product is directly dependent on the manufacturing process used to produce it and, as we know all too well, tooling, equipment, and machines are subject to wear, tear, and infinitely variable operating parameters.  As a result, it is imperative to understand those process parameters and conditions that must be monitored and to develop effective responses or corrective actions to mitigate any negative direct or indirect effects.

Statistical process control techniques have been used by many companies to monitor and manage product quality for years.  Average-Range and Individual-Moving Range charts, to name a few, have been used to identify trends that are indicative of process changes.  When certain control limits or conditions are exceeded, production is stopped and appropriate corrective actions are taken to resolve the concern.  Typically the corrective actions are recorded directly on the control chart.
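
As a minimal sketch of how control limits of this kind are derived, the snippet below computes Individuals and Moving Range (I-MR) limits from a short series of hypothetical readings; the constants 2.66 and 3.267 are the standard chart factors for moving ranges of two consecutive points.

```python
from statistics import mean

# Hypothetical individual measurements (e.g., a critical dimension in mm)
readings = [10.02, 10.05, 9.98, 10.01, 10.07, 9.96, 10.03, 10.00, 10.04, 9.99]

# Moving ranges between consecutive readings
moving_ranges = [abs(b - a) for a, b in zip(readings, readings[1:])]

x_bar = mean(readings)         # process average
mr_bar = mean(moving_ranges)   # average moving range

# Standard I-MR chart constants for subgroups of size 2
ucl_x = x_bar + 2.66 * mr_bar  # upper control limit, individuals chart
lcl_x = x_bar - 2.66 * mr_bar  # lower control limit, individuals chart
ucl_mr = 3.267 * mr_bar        # upper control limit, moving range chart

# Readings outside the limits are a signal to stop and investigate
out_of_control = [x for x in readings if x > ucl_x or x < lcl_x]
print(f"X-bar={x_bar:.3f}  UCL={ucl_x:.3f}  LCL={lcl_x:.3f}  signals={out_of_control}")
```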

Process parameters and product characteristics may be closely correlated; however, few companies make the transition to relying on process parameters alone.  One reason for this is the lack of available data, specifically at launch, to establish effective operating ranges for process parameters.  While techniques such as Design of Experiments can be used, the limited data set rarely provides an adequate sample size for conclusive or definitive parameter ranges to be determined for long-term use.

Learning In Real-Time

It is always in our best interest to use the limited data that is available to establish a measurement baseline.  The absence of extensive history does not exempt us from making “calculated” adjustments to our process parameters.  The objective of measuring and monitoring our processes  and product characteristics is to learn how our processes are behaving in real-time.  In too many cases, however, operating ranges have not evolved with the product development cycle.

Although we may not have established the full operating range, any changes outside of historically observed settings should be cause for review and possibly cause for concern.  Again, the objective is to learn from any changes or deviations that are not within the scope of the current operating condition.

Trigger Events

A trigger event occurs whenever a condition exceeds established process parameters or operating conditions.  This includes failure to follow prescribed or standardized work instructions.  Failing to understand why the “new” condition developed, is needed, or must be accepted jeopardizes process integrity, and the opportunity for learning may be lost.

Our ability to detect or sense “abnormal” process conditions is critical to maintain effective process controls.  A disciplined approach is required to ensure that any deviations from normal operating conditions are thoroughly reviewed and understood with applicable levels of accountability.

An immediate response is required whenever a Trigger Event occurs to facilitate the greatest opportunity for learning.  “Cold Case” investigations based on speculation tend to align facts with a given theory rather than determining a theory based solely on the facts themselves.

Recurring variances or previously observed deviations within the normal process may be cause for further investigation and review.  As mentioned in previous posts, “Variance – OEE’s Silent Partner” and “OEE in an Imperfect World“, one of our objectives is to reduce or eliminate variance in our processes.

Interactions and Coupling

When we consider the definition of normal operating conditions, we must be cognizant of possible interactions.  Two conditions observed during separate events may create chaos if those events occur at the same time.  I have observed multiple equipment failures where we subsequently learned that two machines on the same electrical grid cycled at the exact same time.  One machine continued to cycle without incident while a catastrophic failure occurred on the other.

Although the chance of cycling the machines at the exact same moment was slim and deemed not to be a concern, reality proved otherwise.  Note that monitoring each machine separately showed no signs of abnormal operation or excessive power spikes.  One of the machines (a welder) was moved to a different location in the plant operating on a separate power grid.  No failures were observed following the separation.

Another situation occurred where multiple machines were attached to a common hydraulic system.  Under normal circumstances up to 70% of the machines were operating at any given time.  On some occasions it was noted that an increase in quality defects occurred with a corresponding decrease in throughput although no changes were made to the machines.  In retrospect, the team learned that almost all of the machines (90%) were running.  Later investigation showed that the hydraulic system could not maintain a consistent system pressure when all machines were in operation.  To overcome this condition, boosters were added to each of the hydraulic drops to stabilize the local pressure at the machine.

To summarize our findings here, we need to make sure we understand the system as a whole as well as the isolated machine-specific parameters.  Any potential interactions or effects of process coupling must be considered in the overall analysis.

Reporting

I recommend using a simple reporting system to gather the facts and relevant data.  The objective is to gain sufficient data to allow for an effective review and assessment of the trigger condition and to better understand why it occurred.

It is important to note that a trigger event does not automatically imply that product is non-conforming.  It is very possible, especially during new product launches, that the full range of operating parameters has not yet been realized.  As such, we simply want to ensure that we are not changing parameters arbitrarily without exercising due diligence to ensure that all effects of the change are understood.
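
The report itself need not be elaborate.  The sketch below outlines one possible structure for a trigger-event record; the field names and values are illustrative assumptions, not a prescribed format.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class TriggerEventReport:
    """One possible record for a trigger event (fields are illustrative)."""
    timestamp: datetime                 # when the condition was observed
    process: str                        # machine / operation identifier
    parameter: str                      # e.g., "hydraulic pressure (psi)"
    observed_value: float               # value at the time of the trigger
    established_range: tuple            # (low, high) from the current standard
    immediate_response: str             # containment taken at the time
    product_disposition: str = "hold"   # hold / release pending review
    notes: str = ""                     # context gathered while events unfold

# Example usage with hypothetical values
report = TriggerEventReport(
    timestamp=datetime.now(),
    process="Press 12",
    parameter="hydraulic pressure (psi)",
    observed_value=1890.0,
    established_range=(1950.0, 2050.0),
    immediate_response="Stopped press, notified supervisor",
    notes="Booster pump alarm active on adjacent machine",
)
print(report)
```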

Toyota Update

After a 10 month investigation into the cause of “Sudden Unintended Acceleration”, the results of the Federal Investigation were finally released on February 8, 2011, stating that no electronic source was found to cause the problem.  According to a statement released by Toyota,  “Toyota welcomes the findings of NASA and NHTSA regarding our Electronic Throttle Control System with intelligence (ETCS-i) and we appreciate the thoroughness of their review.”

The findings do, however, implicate some form of mechanical failure and do not necessarily rule out driver error.  It is foreseeable that a mechanical failure could be cause for concern and was seriously considered as part of Toyota’s initial investigation and findings, which also included a concern with floor mats.  While the problem is very real, the root cause may still remain a mystery, and although the timeline for this problem has extended for more than a year, it demonstrates the importance of gathering as much vital evidence as possible as events are unfolding.

A Follow Up to Sustainability

When a product has reached maximum market penetration it becomes vulnerable.  According to USA Today, “Activision announced it was cancelling a 2011 release of its massive music series Guitar Hero and breaking up the franchise’s business unit citing profitability as a concern.”

I find it hard to imagine all of the Guitar Hero games now becoming obsolete and eventual trash.  The life span of the product has exceeded the company’s ability to support it.  This is a sad state of affairs.

Until Next Time – STAY lean!

Vergence Analytics

Twitter:  @Versalytics

OEE in an imperfect world

Image via Wikipedia: a selection of normal distribution probability density curves.

Background: This is a more general presentation of “Variation:  OEE’s Silent Partner” published on January 31, 2011.

In a perfect world we can produce quality parts at rate, on time, every time.  In reality, however, all aspects of our processes are subject to variation that affects each factor of Overall Equipment Effectiveness:  Availability, Performance, and Quality.

Our ability to effectively implement Preventive Maintenance programs and Quality Management Systems is reflected in our ability to control and improve our processes, eliminate or reduce variation, and increase throughput.

The Variance Factor

Every process and measurement is subject to variation and error.  It is only reasonable to expect that metrics such as Overall Equipment Effectiveness and Labour Efficiency will also exhibit variance.  The normal distributions for four (4) different data sets are represented by the graphic that accompanies this post.  You will note that the average for 3 of the curves (Blue, Red, and Yellow) is common (μ = 0), yet the shapes of the curves are radically different.  The green curve shows a normal distribution that is shifted to the left, with an average (μ) of -2, although we can see that its standard deviation is smaller than that of the yellow and red curves.

The graphic also allows us to see the relationship between the Standard Deviation and the shape of the curve.  As the Standard Deviation increases, the height decreases and the width increases.  From these simple representations, we can see that our objective is to reduce the standard deviation.  The only way to do this is to reduce or eliminate variation in our processes.

We can use a variety of statistical measurements to help us determine or describe the amount of variation we may expect to see.  Although we are not expected to become experts in statistics, most of us should already be familiar with the normal distribution or “bell curve” and terms such as Average, Range, Standard Deviation, Variance, Skewness, and Kurtosis.  In the absence of an actual graphic, these terms help us to picture what the distribution of data may look like in our mind’s eye.
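
As a minimal sketch, the snippet below computes those descriptive statistics for a small set of hypothetical hourly readings; it assumes the SciPy library is available for the skewness and kurtosis calculations.

```python
from statistics import mean, stdev
from scipy.stats import skew, kurtosis  # assumes SciPy is installed

# Hypothetical hourly throughput readings (good parts per hour)
data = [118, 121, 116, 124, 119, 98, 122, 117, 120, 115]

print(f"Average:            {mean(data):.1f}")
print(f"Range:              {max(data) - min(data)}")
print(f"Standard deviation: {stdev(data):.2f}")
print(f"Skewness:           {skew(data):.2f}")     # asymmetry of the distribution
print(f"Kurtosis:           {kurtosis(data):.2f}") # tail weight relative to a normal curve
```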

Run Time Data

The simplest common denominator and readily available measurement for production is the quantity of good parts produced.  Many companies have real-time displays that show quantity produced and in some cases go so far as to display Overall Equipment Effectiveness (OEE) and its factors – Availability, Performance, and Quality.  While the expense of live streaming data displays can be difficult to justify, there is no reason to abandon the intent that such systems bring to the shop floor.  Equivalent means of reporting can be achieved using “whiteboards” or other forms of data collection.

I am concerned with any system that is based solely on cumulative shift or run data and does not include run time history.  An often overlooked opportunity for improvement is the lack of stability in productivity or throughput over the course of the run.  Systems with run time data allow us to identify production patterns and significant swings in throughput, and to correlate this data with down time history.  This production story board allows us to analyze sources of instability, identify root causes, and implement timely and effective corrective actions.  For processes where throughput is highly unstable, I recommend a direct hands-on review on the shop floor in lieu of post-production data analysis.

Overall Equipment Effectiveness

Overall Equipment Effectiveness and the factors Availability, Performance, and Quality do not adequately or fully describe the capability of the production process.  Reporting on the change in standard deviation as well as OEE provides a more meaningful understanding of the process  and its inherent capability.

Improved capability also improves our ability to predict process throughput.  Your materials / production control team will certainly appreciate any improvements to stabilize process throughput as we strive to be more responsive to customer demand and reduce inventories.

Until Next Time – STAY lean!

Vergence Analytics

Twitter:  @Versalytics

Variance – OEE’s Silent Partner (Killer)

Image via Wikipedia: example of two sample populations with the same mean but different standard deviations.

I was recently involved in a discussion regarding the value of Overall Equipment Effectiveness (OEE).  Of course, I fully supported OEE and confirmed that it can bring tremendous value to any organization that is prepared to embrace it as a key metric.  I also qualified my response by stating that OEE cannot be managed in isolation:

OEE and its intrinsic factors, Availability, Performance, and Quality, are summary-level indices and do not measure or provide any indication of process stability or capability.

As a top level metric, OEE does not describe or provide a sense of actual run-time performance.  For example, when reviewing Availability, we have no sense of the duration or frequency of down time events, only the net result.  In other words, we can’t discern whether downtime was the result of a single event or the cumulative result of more frequent down time events over the course of the run: a single 60-minute breakdown and twelve 5-minute stops reduce Availability by the same amount, yet they call for very different countermeasures.  Similarly, when reviewing Performance, we cannot accurately determine the actual cycle time or run rate, only the net result.

As shown in the accompanying graphic, two data sets (represented by Red and Blue) having the same average can present very different distributions, as depicted by the range of the data, the height and peakedness of the curves (kurtosis), their asymmetry (skewness), and significantly different standard deviations.

Clearly, any conclusions regarding the process simply based on averages would be very misleading.  In this same context, it is also clear that we must exercise caution when attempting to compare or analyse OEE results without first considering a statistical analysis or representation of the raw process data itself.

The Missing Metrics

Fortunately, we can use statistical tools to analyse run-time performance and determine whether our process is capable of producing at a consistent rate, just as Quality Assurance personnel use the same tools to determine whether a process is capable of producing conforming parts consistently.

One of the greatest opportunities for improving OEE is to use statistical tools to identify opportunities to reduce throughput variance during the production run.

Run-Time or throughput variance is OEE’s silent partner as it is an often overlooked aspect of production data analysis.  Striving to achieve consistent part to part cycle times and consistent hour to hour throughput rates is the most fundamental strategy to successfully improve OEE.  You will note that increasing throughput requires a focus on the same factors as OEE: Availability, Performance, and Quality.  In essence, efforts to improve throughput will yield corresponding improvements in OEE.

Simple throughput variance can readily be measured using Planned versus Actual Quantities produced – either over fixed periods of time (preferred) or cumulatively.  Some of the benefits of using quantity-based measurement are as follows:

  1. Everyone on the shop floor understands quantity or units produced,
  2. This information is usually readily available at the work station,
  3. Everyone can understand or appreciate its value in tangible terms,
  4. Quantity measurements are less prone to error, and
  5. Quantities can be verified (Inventory) after the fact.

For the sake of simplicity, consider measuring hourly process throughput and calculating the average, range, and standard deviation of this hourly data.  With reference to the graphic above, even this fundamental data can provide a much more comprehensive and improved perspective of process stability or capability than would otherwise be afforded by a simple OEE index.
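
A minimal sketch of that exercise, using hypothetical hourly counts and an assumed plan of 120 parts per hour:

```python
from statistics import mean, stdev

planned_per_hour = 120
# Hypothetical good-part counts recorded at the end of each production hour
actual_per_hour = [118, 122, 95, 121, 119, 117, 60, 123]

variance_to_plan = [a - planned_per_hour for a in actual_per_hour]

print(f"Average / hour:     {mean(actual_per_hour):.1f} (plan {planned_per_hour})")
print(f"Range:              {max(actual_per_hour) - min(actual_per_hour)}")
print(f"Standard deviation: {stdev(actual_per_hour):.1f}")

# The hours with the largest swings are the first places to look for causes
worst_hours = sorted(range(len(actual_per_hour)),
                     key=lambda i: abs(variance_to_plan[i]), reverse=True)[:2]
for i in worst_hours:
    print(f"Hour {i + 1}: produced {actual_per_hour[i]}, variance {variance_to_plan[i]:+d}")
```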

Using this data, our objective is to identify those times where the greatest throughput changes occurred and to determine what improvements or changes can be implemented to achieve consistent throughput.  We can then focus our efforts on improvements to achieve a more predictable and stable process, in turn improving our capability.

In OEE terms, we are focusing our efforts to eliminate or reduce variation in throughput by improving:

  1. Availability by eliminating or minimizing equipment downtime,
  2. Performance through consistent cycle to cycle task execution, and
  3. Quality by eliminating the potential for defects at the source.

Measuring Capability

To make sure we’re on the same page, let’s take a look at the basic formulas that may be used to calculate Process Capability.  In the automotive industry, suppliers may be required to demonstrate process capability for certain customer designated product characteristics or features.  When analyzing this data, two sets of capability formulas are commonly used:

  1. Preliminary (Pp) or Long Term (Cp) Capability:  Determines whether the product can be produced within the required tolerance range:
    • Pp or Cp = (Upper Specification Limit – Lower Specification Limit) / (6 x Standard Deviation)
  2. Preliminary (Ppk) or Long Term (Cpk) Capability:  Determines whether the product can be produced at the target dimension and within the required tolerance range:
    • Capability = Minimum of Either:
      • Capability Upper = (Upper Specification Limit – Average) / (3 x Standard Deviation)
      • Capability Lower = (Average – Lower Specification Limit) / (3 x Standard Deviation)

When Pp = Ppk or Cp = Cpk, we can conclude that the process is centered on the target or nominal dimension.  Typically, the minimum acceptable Capability Index (Cpk) is 1.67 and implies that the process is capable of producing parts that conform to customer requirements.

In our case we are measuring quantities or throughput data, not physical part dimensions, so we can calculate the standard deviation of the collected data to determine our own “natural” limits (6 x Standard Deviation). Regardless of how we choose to present the data, our primary concern is to improve or reduce the standard deviation over time and from run to run.
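
As a minimal sketch of that idea, the snippet below computes the average, standard deviation, and “natural” limits (average ± 3 standard deviations) for a set of hypothetical hourly counts, and expresses a Ppk-style index against an assumed minimum planned rate; the plan value is an illustration only.

```python
from statistics import mean, stdev

# Hypothetical hourly good-part counts from a production run
counts = [118, 122, 111, 121, 119, 117, 108, 123, 120, 116]

avg = mean(counts)
sd = stdev(counts)

# "Natural" process limits: average +/- 3 standard deviations
upper_natural = avg + 3 * sd
lower_natural = avg - 3 * sd

# Throughput analog of a lower capability index against an assumed minimum rate
minimum_planned_rate = 110  # assumed lower "specification" for illustration
ppk_lower = (avg - minimum_planned_rate) / (3 * sd)

print(f"Average {avg:.1f}, std dev {sd:.2f}")
print(f"Natural limits: {lower_natural:.1f} to {upper_natural:.1f}")
print(f"Throughput Ppk (lower): {ppk_lower:.2f}")
# Run-to-run improvement shows up as a shrinking standard deviation
# and a rising index for the same planned rate.
```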

Once we have a statistical model of our process, control charts can be created that in turn are used to monitor future production runs.  This provides the shop floor with a visual base line using historical data (average / limits) on which improvement targets can be made and measured in real-time.

Run-Time Variance Review

I recall using this strategy to achieve monumental gains – a three shift operation with considerable instability became an extremely capable and stable two shift production operation coupled with a one shift preventive maintenance / change over team.  Month over month improvements were noted by significantly improved capability data (substantially reduced Standard Deviation) and marked increases in OEE.

Process run-time charts with statistical controls were implemented for quantities produced just as the Quality department maintains SPC charts on the floor for product data.  The shop floor personnel understood the relationship between quantity of good parts produced and how this would ultimately affect the department OEE as well.

Monitoring quantities of good parts produced over shorter fixed time intervals is more effective than a cumulative counter that tracks performance over the course of the shift.  In this specific case, the quantity was “reset” for each hour of production essentially creating hourly in lieu of shift targets or goals.

Recording / plotting production quantities at fixed time intervals combined with notes to document specific process events creates a running production story board that can be used to identify patterns and other process anomalies that would otherwise be obscured.

Conclusion

I am hopeful that this post has heightened your awareness regarding the data that is represented by our chosen metrics.  In the boardroom, metrics are often viewed as absolute values coupled with a definitive sense of sterility.

Run-Time Variance also introduces a new perspective when attempting to compare OEE between shifts, departments, and factories.  From the context of this post, having OEE indices of the same value does not imply equality.  As we can see, metrics are not pure and perhaps even less so when managed in isolation.

Variance is indeed OEE’s Silent Partner but left unattended, Variance is also OEE’s Silent Killer.

Until Next Time – STAY lean!

Vergence Analytics

Twitter:  @Versalytics

Discover Toyota’s Best Practice

The new headquarters of the Toyota Motor Corporation, opened in February 2005 in Toyota City. (Photo credit: Wikipedia)

I have always been impressed by Toyota’s inherent ability to adapt, improve, and embrace change even during the harshest times.  This innate ability is a signature trait of Toyota’s culture and has been the topic of intense study and research for many years.

How is it that Toyota continues to thrive regardless of the circumstances they encounter?  While numerous authors and lean practitioners have studied Toyota’s systems and shared best practices, all too many have missed the underlying strategy behind Toyota’s ever evolving systems and processes.  As a result, we are usually provided with ready to use solutions, countermeasures, prescriptive procedures, and forms that are quickly adopted and added to our set of lean tools.

The true discovery occurs when we realize that these forms and procedures are the product or outcome of an underlying systemic thought process.  This is where the true learning and process transformations take place.  In many respects this is similar to an artist who produces a painting.  While we can enjoy the product of the artist’s talent, we can only wonder how the original painting appears in the artist’s mind.

Of the many books that have been published about Toyota, there is one book that has finally managed to capture and succinctly convey the strategy responsible for the culture that presently defines Toyota.  Written by Mike Rother, “Toyota Kata – Managing People For Improvement, Adaptiveness, and Superior Results” reveals the methodology used to develop people at all levels of the Toyota organization.

Surprisingly, the specific techniques described in the book are not new; however, the manner in which they are used does not necessarily follow conventional wisdom or industry practice.  Throughout the book, it becomes clear that the current practices at Toyota are the product of a collection of improvements, each building on the results of previous steps taken toward a seemingly elusive target.

Although we have gleaned and adopted many of Toyota’s best practices into our own operations, we do not have the benefit of the lessons learned nor do we fully understand the circumstances that led to the creation of these practices as we know them today.  As such, we are only exposed to one step of possibly many more to follow that may yield yet another radical and significantly different solution.

In simpler terms, the solutions we observe in Toyota today are only a glimpse of the current level of learning.  In the spirit of the improvement kata, it stands to reason that everything is subject to change.  The one constant throughout the entire process is the improvement kata or routine that is continually practiced to yield even greater improvements and results.

If you or your company are looking for a practical, hands on, proven strategy to sustain and improve your current operations then this book, “Toyota Kata – Managing People For Improvement, Adaptiveness, and Superior Results“, is the one for you.  The improvement kata is only part of the equation.  The coaching kata is also discussed at length and reveals Toyota’s implementation and training methods to assure the whole company mindset is engaged with the process.

Why are we just learning of this practice now?  The answer is quite simple.  The method itself is practiced by every Toyota employee at such a frequency that it has become second nature to them and trained into the culture itself.  While the tools that are used to support the practice are known and widely used in industry, the system responsible for creating them has been obscured from view – until now.

You can preview the book by simply clicking on the links in our post.  Transforming the culture in your company begins by adding this book, “Toyota Kata – Managing People For Improvement, Adaptiveness, and Superior Results”, to your lean library.  I have been practicing the improvement and coaching kata for some time and the results are impressive.  The ability to engage and sustain all employees in the company is supported by the simplicity of the kata model itself. For those who are more ambitious, you may be interested in the Toyota Kata Training offered by the University of Michigan.

Learning and practicing the Toyota improvement kata is a strategy for company leadership to embrace.  To do otherwise is simply waiting to copy the competition.  I have yet to see a company vision statement where the ultimate goal is to be second best.

Until Next Time – STAY lean!

Vergence Analytics

OEE: The Means to an End – Differentiation Where It Matters Most

Image via Wikipedia: a pit stop at the Autodromo Nazionale di Monza.

Does your organization focus on results or the means to achieve them?  Do you know when you’re having a good day?  Are your processes improving?

The reality is that too many opportunities are missed by simply focusing on results alone.  As we have discussed in many of our posts on problem solving and continuous improvement, the actions you take now will determine the results you achieve today and in the future. Focus on the means of making the product and the results are sure to follow.

Does it not make sense to measure the progress of actions and events in real-time that will affect the end result? Would it not make more sense to monitor our processes similar to the way we use Statistical Process Control techniques to measure current quality levels?  Is it possible to establish certain “conditions” that are indicative of success or failure at prescribed intervals as opposed to waiting for the run to finish?

By way of analogy, consider a team competing in a championship race.  While the objective is to win the race, we can be certain that each lap is timed to the fraction of a second and each pit stop is scrutinized for opportunities to reduce time off the track.  We can also be sure that fine tuning of the process and other small corrections are being made as the race progresses.  If performed correctly and faster than the competition, the actions taken will ultimately lead to victory.

Similarly, does it not make sense to monitor OEE in real-time?  If it is not possible or feasible to monitor OEE itself, is it possible to measure the components – Availability, Performance, and Quality – in real-time?  I would suggest that we can.

Performance metrics may include production and quality targets based on elapsed production time.  If the targets are hit at the prescribed intervals, then the desired OEE should also be realized.  If certain targets are missed, an escalation process can be initiated to involve the appropriate levels of support to immediately and effectively resolve the concerns.

A higher reporting frequency or shorter time interval provides the opportunity to make smaller (minor) corrections in real-time and to capture relevant information for events that negatively affect OEE.
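
A rough sketch of what such an interval check might look like, assuming a hypothetical shift target, reporting interval, and escalation threshold:

```python
def check_interval(good_count, minutes_elapsed, shift_minutes=480,
                   shift_target=900, escalation_threshold=0.05):
    """Compare cumulative good parts against the pace needed to hit the shift target.

    Returns (on_track, shortfall), where shortfall is the number of parts
    behind the expected pace. All values here are illustrative assumptions.
    """
    expected_by_now = shift_target * (minutes_elapsed / shift_minutes)
    shortfall = expected_by_now - good_count
    on_track = shortfall <= escalation_threshold * shift_target
    return on_track, round(shortfall)

# Example: 2 hours into an 8-hour shift with 170 good parts produced
on_track, shortfall = check_interval(good_count=170, minutes_elapsed=120)
if not on_track:
    print(f"Escalate: {shortfall} parts behind pace - engage support now")
else:
    print(f"On track ({shortfall} parts behind pace)")
```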

Improving OEE in real-time requires a skilled team that is capable of troubleshooting and solving problems in real-time.  So, resolving concerns and implementing effective corrective actions in real-time is as important to improving OEE as the data collection process itself.

A lot of time, energy, and resources are expended to collect and analyze data.  Unfortunately, when the result is finalized, the opportunity to change it is lost to history.  In the absence of event-driven data collection, after-the-fact analysis leads to greater speculation regarding the events that “may have” occurred versus those that actually did.

Clearly, an end-of-run post-mortem is more meaningful when the data supporting the run represents the events as they were recorded in real-time, when they actually occurred.  This data affords a greater opportunity to dissect the events themselves and delve into a deeper analysis that may yield opportunities for long-term improvements.

Set yourself apart from the competition.  Focus on the process while it is running and make improvements in real-time.  The results will speak for themselves.

Your feedback matters

If you have any comments, questions, or topics you would like us to address, please feel free to leave your comment in the space below or email us at feedback@leanexecution.ca or feedback@versalytics.com.  We look forward to hearing from you and thank you for visiting.

Until Next Time – STAY lean


Vergence Analytics
 

How Effective is Your Problem Solving?


Background

Of the many metrics that we use to manage our businesses, one area that is seldom measured is the effectiveness of the problem solving process itself.  We often engage a variety of problem solving tools such as 5-Why, Fishbone Diagrams, Fault Trees, Design of Experiments (DOE), or other forms of Statistical Analysis in our attempts to find an effective solution and implement permanent corrective actions.

Unfortunately, it is not uncommon for problems to persist even after the “fix” has been implemented.  Clearly, if the problem is recurring, either the problem was not adequately defined, the true root cause was not identified and verified correctly, or the corrective action (fix) required to address the root cause is inadequate.  While this seems simple enough, most lean practitioners recognize that solving problems is easier said than done.

Customers demand and expect defect free products and services from their suppliers.  To put it in simple terms, the mission for manufacturing is to:  “Safely produce a quality part at rate, delivered on time, in full.”  Our ability to attain the level of performance demanded by our mission and our customers is dependent on our ability to efficiently and effectively solve problems.

Metrics commonly used to measure supplier performance include Quality Defective Parts Per Million (PPM), Incident Rates, and Delivery Performance.  Persistent negative performance trends and repeat occurrences are indicative of an ineffective problem-solving strategy.  Our ability to identify and solve problems efficiently and effectively increases customer confidence and minimizes product and business risks.

Predictability

One of the objectives of your problem solving activities should be to predict or quantify the expected level of improvement.  The premise of predictability introduces a degree of accountability to the problem solving process that may otherwise be non-existent.  In order to predict the outcome, the team must learn and understand the implications of the specific improvements they are proposing and, to the same extent, what the present process state is lacking.

To effectively solve a problem requires a thorough understanding of the elements that comprise the ideal state required to generate the desired outcome.  From this perspective, it is our ability to discern or identify those items that do not meet the ideal state condition and address them as items for improvement.  If each of these elements could also be quantified in terms of contribution to the ideal state, then a further refinement in predictability can be achieved.

The ability to predict an outcome is predicated on the existence of a certain level of “wisdom”, knowledge, or understanding whereby a conclusion can be formulated.

Plan versus Actual

Measuring the effectiveness of the problem solving process can be achieved by comparing Planned versus Actual results. The ability to predict or plan for a specific result suggests an implicit level of prior knowledge exists to support or substantiate the outcome.

Fundamentally, the benefits of this methodology are three-fold as it measures:

  • How well we understand the process itself,
  • Our ability to adequately define the problem and effectively identify the true root cause, and
  • The effectiveness of the solution.

Another benefit of this methodology is the level of inherent accountability.  Specific performance measurements demand a greater degree of integrity in the problem solving process and accountability is a self-induced attribute of most participants.

The ability for a person or team to accurately define, solve, and implement an effective solution with a high degree of success also serves as a measure of the individual’s or team’s level of understanding of that process.  From another perspective, it may serve as a measure of knowledge and learning yet to be acquired.

As you may expect, this strategy is not limited to solving quality problems and can be applied to any system or process.  This type of measurement system is used by most manufacturing facilities to measure planned versus actual parts produced and is directly correlated to overall equipment effectiveness or OEE.

Any company working in the automotive manufacturing sector recognizes that this methodology is an integral part of Toyota’s operating philosophy and for good reason.  As a learning organization, Toyota fully embraces opportunities to learn from variances to plan.

Performance expectations are methodically evaluated and calculated before engaging the resources of the company.  It is important to note that exceeding expectations is as much a cause for concern as falling short.  Failing to meet the planned target (high / low or over / under) indicates that a knowledge gap still exists.  The objective is to revisit the assumptions of the planning model and to learn where adjustments are required to generate a predictable outcome.

Steven Spear discusses these key attributes that differentiate industry leaders from the rest of the pack in his book titled The High Velocity Edge.

First Time Through Quality (FTQ)

FTQ can also be applied to problem solving efforts by measuring the number of iterations that were required before the final solution was achieved.  Just as customers have zero tolerance for repeat occurrences, we should come to expect the same level of performance and accountability from our internal resources.

Although the goal may be to achieve a 100% First Time Through Solution rate, be wary of Paralysis by Analysis while attempting to find the perfect solution.  The objective is to enhance the level of understanding of the problem and the intended solution, not to bring the flow of ideas to a halt.  Too often, activity is confused with action.  To effect change, actions are required.  The goal is to implement effective, NOT JUST ANY, solutions.

Jishuken

Literally translated, Jishuken means “self-study”.  Prior to engaging external company resources, the person requesting a Jishuken event is expected to demonstrate that they have indeed become a student of the process by learning and demonstrating their knowledge of the process or problem.  Jishuken refers to the collaborative problem solving strategy used after all internal efforts have been exhausted, when external resources are deployed with “fresh eyes” to share knowledge and attempt to achieve resolution.  While the end result does not appear to be “self-study”, the prerequisite for Jishuken is exhausting all internal efforts.  In other words, the facility requesting outside resources must first strive to become experts themselves.

Summary

Many companies limit their formal problem solving activities to the realm of quality and traditional problem solving tools are only used when non-conforming or defective product has been reported by the customer.  Truly agile / lean companies work ahead of the curve and attempt to find a cure before a problem becomes a reality at the customer level.

With this in mind, it stands to reason that any attempt to improve Overall Equipment Effectiveness or OEE also requires some form of problem solving that, in turn, can affect a positive change to one or all of the components that comprise OEE:  Availability, Performance, and First Time Through Quality.

As a reminder, OEE is the product of Availability (A) x Performance (P) x Quality (Q) and measures how effectively the available (scheduled) time was used to produce a quality product.  To get your free OEE tutorial or any one of our OEE templates, visit our Free Downloads page or pick the files you want from our free downloads box in the side bar.  You can easily customize these templates to suit your specific process or operation.
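
A minimal sketch of that calculation in Python, using the commonly used factor definitions (Availability = run time / scheduled time, Performance = ideal cycle time × total count / run time, Quality = good parts / total parts) and hypothetical shift values:

```python
def oee(scheduled_time_min, downtime_min, ideal_cycle_time_sec,
        total_count, good_count):
    """OEE = Availability x Performance x Quality (commonly used definitions)."""
    run_time_min = scheduled_time_min - downtime_min
    availability = run_time_min / scheduled_time_min
    performance = (ideal_cycle_time_sec * total_count) / (run_time_min * 60)
    quality = good_count / total_count
    return availability * performance * quality, availability, performance, quality

# Hypothetical shift: 480 scheduled minutes, 45 minutes of downtime,
# a 30-second ideal cycle, 800 parts produced, 784 of them good.
overall, a, p, q = oee(480, 45, 30, 800, 784)
print(f"A={a:.1%}  P={p:.1%}  Q={q:.1%}  OEE={overall:.1%}")  # OEE ~81.7%
```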

Many years ago I read a quote that simply stated,

“The proof of wisdom is in the results.”

And so it is.

Until Next Time – STAY lean!

Vergence Analytics