Tag: Process

The Point of No Return

The “Point of No Return” is a common expression that typically means you’ve reached an unrecoverable state if you continue on the current course of action.  When I clicked the “Publish” button for this post, I reached a point of no return (action).

From an accounting perspective, the term “Break Even” is used to define the point where Total Costs equal Total Revenue.  The break even point translates to the quantity of parts that must be produced and sold before a profit can be earned.  Stock exchanges around the world serve as a constant reminder that investors are concerned with profit and return on investment.  In this context, a point of no return (profit) also exists.
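To make the idea concrete, the sketch below applies the standard break-even formula – fixed costs divided by the contribution margin per part – to purely hypothetical numbers:

# Break-even sketch (all figures are hypothetical).
fixed_costs = 50_000.00        # tooling, setup, and overhead ($)
price_per_part = 12.50         # selling price ($ per part)
variable_cost_per_part = 8.75  # material, labour, energy ($ per part)

contribution_margin = price_per_part - variable_cost_per_part
break_even_qty = fixed_costs / contribution_margin

print(f"Break-even quantity: {break_even_qty:,.0f} parts")
# Every part produced and sold beyond this quantity contributes to profit.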

Businesses exist for the customer or consumer.  Poor quality, missed delivery dates, short shipments, warranty returns, and poor customer service all lead to higher costs and may eventually cause customers to reach their “point of no return”.  Customers understand that the lowest price is not always the lowest cost option in the long run.  Business depends on repeat customers.

What’s the point?

In the simplest of terms, our actions must yield a return that is greater than the investment required to achieve it.  Delivering VALUE to the customer is one of the underlying principles of lean thinking and is measured by our ability to provide the highest quality products and services at the lowest possible cost, delivered on time and in full.

This all sounds great on the surface, but there will come a time when the cost to improve your systems and / or processes will exceed the return on investment – another point of no return.  Alternative, lower cost solutions must be found to meet your continuous improvement objectives.

Where a significant capital investment is required, your company may require a payback period of one or two years.  A capital investment for a program that is soon to become obsolete is not a feasible option.  The point of no return (investment) is reached before any funding can even be considered.
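As a rough illustration (the investment and savings figures below are made up), a simple payback calculation shows how quickly that funding hurdle is reached:

# Simple payback period sketch (hypothetical figures).
capital_investment = 100_000.00   # one-time cost of the improvement ($)
annual_savings = 40_000.00        # expected yearly return ($)
required_payback_years = 2.0      # company funding threshold (years)

payback_years = capital_investment / annual_savings
print(f"Payback period: {payback_years:.1f} years")

if payback_years > required_payback_years:
    print("Exceeds the required payback period - funding is unlikely.")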

The Bottom Line

Understandably, the team will become extremely frustrated when the very solution they proposed is rejected or declined.  While they may not doubt their own ability to provide viable solutions, they will doubt the company’s commitment to pursue excellence and continually improve.

For this reason, it is essential for the team to understand the reasons why.  It also underscores the need to identify and respond to improvement opportunities quickly and as early as possible during the launch cycle of any new system, process, or product.

Embrace Rejection

Rejection can sometimes be a gift.  As I have stated many times before, “There’s always a better way and more than one solution.”  Could it be that sometimes bad things happen for a good reason?

Rejection provides (forces) us with the opportunity to consider the present circumstances from a fresh perspective.  If the premise for the proposed solution was to “fix” the current system or process as it is now defined, perhaps a radically different and innovative system or process could better serve the company in the long term.

Is it possible that a new and lower cost alternative exists that could be at least as effective and perhaps even more efficient?  Many of the systems, processes, and technologies that exist today were discovered by removing the limits we unconsciously place on the scope of a problem – limits that in turn restrict the solutions we are able to develop.

The real problem with problem solving is the idea that the only solution is a “fix” to a system or process that was already flawed from the outset.

Be Inspired

TED Talks are rife with examples of problem solving that yield radical and in some cases simple solutions.  The following TED Talks may serve to inspire you and your organization to look at problems and their solutions from a different perspective:

These TED Talks present problems on a different scope and scale than we may be accustomed to; however, the very discussion of alternatives should serve to inspire radical thinking that in turn inspires radical change.

You may have noticed from these TED Talks that some of the solutions presented were found outside of the context or circumstances from which the problem originated.  Is it possible that a “surrogate” solution exists elsewhere?

“Problems cannot be solved by the same level of thinking that created them.”

The point of no return is significant and demands “out of the box” thinking.  Many companies no longer grace our communities or employ our neighbours, having lost business and opportunities for growth to lower cost manufacturers and distributors in a continually evolving global economy.  The difference could very well be how we embrace the point of no return.

Consider that Toyota, as a newcomer to the North American automotive market, implemented innovative supply chain, inventory management, and production techniques to remain competitive.  Radical change and innovation do not imply higher cost or investment.  At best they should simply imply “different”.

Other companies like Apple and GE managed to change their futures under the leadership of Steve Jobs and Jack Welch respectively.  Was it always pretty? Likely not from the books I’ve read.  However, the outcomes are undeniable.

Steve Jobs’ decision to solicit support from Microsoft’s Bill Gates was extremely radical at the time.  This video, “Steve Jobs, Bill Gates and Microsoft – It’s Complicated“, clearly demonstrates the challenges faced in the relationship between Apple and Microsoft.  As for GE, I highly recommend reading “Straight From the Gut” by Jack Welch to best understand the radical changes in business and company culture during his tenure there.

Asking the right questions, open minds, radical thinking, and strong leadership coupled with a commitment to pursue excellence, continually improve, and solve problems may help everyone realize that the point of no return can be one of the greatest gifts you’ll ever receive.

To quote Albert Einstein, “A clever person solves a problem.  A wise person avoids it.” and so we … “look before we leap.”

Your feedback matters

If you have any comments, questions, or topics you would like us to address, please feel free to leave your comment in the space below or email us at feedback@leanexecution.ca or feedback@versalytics.com.  We look forward to hearing from you and thank you for visiting.

Until Next Time – STAY lean

Versalytics Analytics

Method Matters and OEE

This figure demonstrates the central limit theorem: increasing sample sizes result in sample means that are more closely distributed about the population mean. (Photo credit: Wikipedia)

Tricks of the Trade


Work smarter not harder! If we’re honest with ourselves, we realize that sometimes we have a tendency to make things more difficult than they need to be. A statistics guru once asked me why a sample size of five (5) is commonly used when plotting X-Bar / Range charts. I didn’t really know the answer but assumed that there had to be a “statistically” valid reason for it. Do you know why?

Before calculators were commonplace, sample sizes of five (5) made it easier to calculate the average (X-Bar): add the numbers together, double the sum, then move the decimal one position to the left – in other words, multiply by 2 and divide by 10, which is the same as dividing by 5.  All of this could be done on a simple piece of paper, using some very basic math skills, making it possible for almost anyone to chart efficiently and effectively.

  1. Sample Measurements:
    1. 2.5
    2. 2.7
    3. 3.1
    4. 3.2
    5. 1.8
  2. Add them together:
    • 2.5+2.7+3.1+3.2+1.8 = 13.3
  3. Double the result:
    • 13.3 + 13.3 = 26.6
  4. Move the decimal one position to the left:
    • 2.66

To calculate the range of the sample, we subtract the smallest value (1.8) from the largest value (3.2). Using the values in our example above, the range is 3.2 – 1.8 = 1.4.
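For readers who prefer to see it in code, here is a minimal sketch of the same shortcut (Python is used purely for illustration) – doubling the sum and shifting the decimal one place is simply another way of dividing by five:

# Quick X-Bar / Range calculation for a subgroup of five measurements.
sample = [2.5, 2.7, 3.1, 3.2, 1.8]

total = sum(sample)                       # 13.3
x_bar = (total + total) / 10              # double it, shift the decimal: 2.66
sample_range = max(sample) - min(sample)  # 3.2 - 1.8 = 1.4

print(f"X-Bar = {x_bar:.2f}, Range = {sample_range:.2f}")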


The point of this example is not to teach you how to calculate Average and Range values. Rather, the example demonstrates that a simple method can make a relatively complex task easier to perform.


Speed of Execution

We’ve written extensively on the topic of Lean and Overall Equipment Effectiveness (OEE) as means to improve asset utilization. However, the application of Lean thinking and OEE doesn’t have to stop at the production floor.  Can the pursuit of excellence and effective asset utilization be applied to the front office too?


Today’s computers operate at different speeds depending on the manufacturer and installed chip set. Unfortunately, faster computers can make sloppy programming appear less so. In this regard, I’m always more than a little concerned with custom software solutions.

We recently worked on an assignment that required us to create unique combinations of numbers. We used a “mask” that is doubled after each iteration of the loop to determine whether a bit is set. This simple loop is also the kernel, or core code, of the application.  All computers work with bits and bytes.  One byte of data has 8 bit positions (0-7) and represents numeric values as follows:


  • 0 0 0 0 0 0 0 0 =   0
  • 0 0 0 0 0 0 0 1 =   1
  • 0 0 0 0 0 0 1 0 =   2
  • 0 0 0 0 0 1 0 0 =   4
  • 0 0 0 0 1 0 0 0 =   8
  • 0 0 0 1 0 0 0 0 =  16
  • 0 0 1 0 0 0 0 0 =  32
  • 0 1 0 0 0 0 0 0 =  64
  • 1 0 0 0 0 0 0 0 = 128

To determine whether a single bit is set, our objective is to test it as we generate the numbers 1, 2, 4, 8, 16, 32, 64 and so on – each representing a unique bit position in binary form. Since this setting and testing of bits is part of our core code, we need a method that can double a number very quickly:

  • Multiplication:  Multiply by Two, where x = x * 2
  • Addition:  Add the Number to Itself, where x = x + x

These seem like simple options; however, in computer terms, multiplying is slower than addition, and SHIFTing is faster than addition.  You may notice that every time we double a number, we’re simply shifting our single “1” bit one position to the left.  Most computers have a built-in SHL instruction in their native machine code that is designed to do just that.  In this case, the speed of execution of our program will depend on the language we choose and how close to the metal it allows us to get.  Not all languages provide for “bit” manipulation.  For this specific application, a compiled native assembly code routine would provide the fastest execution time.  Testing whether a bit is set can also be performed more efficiently using native assembly code.
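To illustrate the idea without dropping to assembly, here is a small sketch in Python (the example byte is arbitrary): shifting the mask left doubles it, and a bitwise AND tests whether the corresponding bit is set.

# Doubling a single-bit mask with a left shift and testing bits with AND.
value = 0b0101_0010  # arbitrary example byte

mask = 1
for position in range(8):
    if value & mask:          # bitwise AND is non-zero only if the bit is set
        print(f"bit {position} is set (mask = {mask})")
    mask <<= 1                # shift left by one position: the same as doubling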


Method Matters

The above examples demonstrate that different methods can be used to yield the same result, and the cycle times will differ for each method we deploy.  This discussion also matters from an Overall Equipment Effectiveness (OEE) perspective.  Just as companies focus on reducing setup time and eliminating quality problems, many also focus on improving cycle times.


Where operations are labour intensive, simply adding one or more extra people to the line may improve the cycle time.  Unless we change the cycle time in our process standard, the Performance factor for OEE may exceed 100%.  If we use the ideal cycle time determined for our revised “method”, it is possible that the Performance factor remains unchanged.
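A quick sketch of the commonly used Performance calculation, Performance = (Ideal Cycle Time x Total Count) / Run Time, shows how an outdated standard can push the factor past 100% (the figures are hypothetical):

# Performance factor before and after updating the ideal cycle time standard.
run_time_minutes = 420.0   # net run time for the shift
total_count = 1_000        # parts produced

old_ideal_cycle = 0.50     # minutes per part (standard before adding labour)
new_ideal_cycle = 0.40     # minutes per part (revised standard)

performance_old = (old_ideal_cycle * total_count) / run_time_minutes
performance_new = (new_ideal_cycle * total_count) / run_time_minutes

print(f"Against the old standard: {performance_old:.0%}")      # ~119% - over 100%
print(f"Against the revised standard: {performance_new:.0%}")  # ~95%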


Last Words

The latter example demonstrates once again why OEE cannot be used in isolation.  Although an improvement to cycle time will create capacity, OEE results based on the new cycle time for a given process may not necessarily change.  Total Equipment Effectiveness Performance (TEEP) will actually decrease as available capacity increases, unless the additional capacity is put to use.

When we’re looking at OEE data in isolation, we may not necessarily see the “improved” performance we were looking for – at least not in the form we expected to see it.  It is just as important to understand the process behind the “data” in order to engage in a meaningful discussion on OEE.


Your feedback matters


If you have any comments, questions, or topics you would like us to address, please feel free to leave your comment in the space below or email us at feedback@leanexecution.ca or feedback@versalytics.com.  We look forward to hearing from you and thank you for visiting.


Until Next Time – STAY lean


Versalytics Analytics


Variance – OEE’s Silent Partner (Killer)

Example of two sample populations with the same mean but different standard deviations (Image via Wikipedia)

I was recently involved in a discussion regarding the value of Overall Equipment Effectiveness (OEE).  Of course, I fully supported OEE and confirmed that it can bring tremendous value to any organization that is prepared to embrace it as a key metric.  I also qualified my response by stating that OEE cannot be managed in isolation:

OEE and its intrinsic factors – Availability, Performance, and Quality – are summary level indices and do not measure or provide any indication of process stability or capability.

As a top level metric, OEE does not describe or provide a sense of actual run-time performance.  For example, when reviewing Availability, we have no sense of duration or frequency of down time events, only the net result.  In other words we can’t discern whether downtime was the result of a single event or the cumulative result of more frequent down time events over the course of the run.  Similarly, when reviewing Performance, we cannot accurately determine the actual cycle time or run rate, only the net result.

As shown in the accompanying graphic, two data sets (represented by Red and Blue) having the same average can present very different distributions, as reflected in the range of the data, the peakedness of the curve (kurtosis), the asymmetry of the curve (skewness), and significantly different standard deviations.

Clearly, any conclusions regarding the process simply based on averages would be very misleading.  In this same context, it is also clear that we must exercise caution when attempting to compare or analyse OEE results without first considering a statistical analysis or representation of the raw process data itself.

The Missing Metrics

Fortunately, we can use statistical tools to analyse run-time performance and determine whether our process is capable of delivering consistent throughput, just as Quality Assurance personnel use statistical analysis tools to determine whether a process is capable of producing conforming parts consistently.

One of the greatest opportunities for improving OEE is to use statistical tools to identify opportunities to reduce throughput variance during the production run.

Run-Time or throughput variance is OEE’s silent partner as it is an often overlooked aspect of production data analysis.  Striving to achieve consistent part to part cycle times and consistent hour to hour throughput rates is the most fundamental strategy to successfully improve OEE.  You will note that increasing throughput requires a focus on the same factors as OEE: Availability, Performance, and Quality.  In essence, efforts to improve throughput will yield corresponding improvements in OEE.

Simple throughput variance can readily be measured using Planned versus Actual Quantities produced – either over fixed periods of time (preferred) or cumulatively.  Some of the benefits of using quantity based measurement are as follows:

  1. Everyone on the shop floor understands quantity or units produced,
  2. This information is usually readily available at the work station,
  3. Everyone can understand or appreciate its value in tangible terms,
  4. Quantity measurements are less prone to error, and
  5. Quantities can be verified (Inventory) after the fact.

For the sake of simplicity, consider measuring hourly process throughput and calculating the average, range, and standard deviation of this hourly data.  With reference to the graphic above, even this fundamental data can provide a much more comprehensive and improved perspective of process stability or capability than would otherwise be afforded by a simple OEE index.
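Here is a minimal sketch of that calculation (the hourly counts are hypothetical); it also derives the simple standard deviation based “natural” limits discussed further below:

import statistics

# Hourly good-part counts for one production run (hypothetical).
hourly_counts = [112, 108, 95, 117, 101, 88, 110, 106]

mean = statistics.mean(hourly_counts)
data_range = max(hourly_counts) - min(hourly_counts)
stdev = statistics.stdev(hourly_counts)   # sample standard deviation

# Simple "natural" limits for a run chart: mean +/- 3 standard deviations.
upper_limit = mean + 3 * stdev
lower_limit = mean - 3 * stdev

print(f"mean = {mean:.1f}, range = {data_range}, stdev = {stdev:.1f}")
print(f"natural limits: {lower_limit:.1f} to {upper_limit:.1f}")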

Using this data, our objective is to identify those times where the greatest throughput changes occurred and to determine what improvements or changes can be implemented to achieve consistent throughput.  We can then focus our efforts on improvements to achieve a more predictable and stable process, in turn improving our capability.

In OEE terms, we are focusing our efforts to eliminate or reduce variation in throughput by improving:

  1. Availability by eliminating or minimizing equipment downtime,
  2. Performance through consistent cycle to cycle task execution, and
  3. Quality by eliminating the potential for defects at the source.

Measuring Capability

To make sure we’re on the same page, let’s take a look at the basic formulas that may be used to calculate Process Capability.  In the automotive industry, suppliers may be required to demonstrate process capability for certain customer designated product characteristics or features.  When analyzing this data, two sets of capability formulas are commonly used:

  1. Preliminary (Pp) or Long Term (Cp) Capability:  Determines whether the product can be produced within the required tolerance range,
    • Pp or Cp = (Upper Specification Limit – Lower Specification Limit) / (6 x Standard Deviation)
  2. Preliminary (Ppk) or Long Term (Cpk) Capability:  Determines whether product can be produced at the target dimension and within the required tolerance range:
    • Capability = Minimum of Either:
      • Capability Upper = (Upper Specification Limit – Average) / (3 x Standard Deviation)
      • Capability Lower = (Average – Lower Specification Limit) / (3 x Standard Deviation)

When Pp = Ppk or Cp = Cpk, we can conclude that the process is centered on the target or nominal dimension.  Typically, the minimum acceptable Capability Index (Cpk) is 1.67 and implies that the process is capable of producing parts that conform to customer requirements.
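For illustration, here is a short sketch of those formulas; the measurements and specification limits below are made up:

import statistics

# Hypothetical measurements for a customer designated characteristic.
measurements = [10.02, 9.98, 10.05, 9.97, 10.01, 10.03, 9.99, 10.00]
usl, lsl = 10.15, 9.85   # upper / lower specification limits

mean = statistics.mean(measurements)
stdev = statistics.stdev(measurements)

cp = (usl - lsl) / (6 * stdev)
cpk = min((usl - mean) / (3 * stdev),
          (mean - lsl) / (3 * stdev))

print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")
# When Cp and Cpk are equal, the process is centered between the limits.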

In our case we are measuring quantities or throughput data, not physical part dimensions, so we can calculate the standard deviation of the collected data to determine our own “natural” limits (6 x Standard Deviation). Regardless of how we choose to present the data, our primary concern is to improve or reduce the standard deviation over time and from run to run.

Once we have a statistical model of our process, control charts can be created that in turn are used to monitor future production runs.  This provides the shop floor with a visual base line using historical data (average / limits) on which improvement targets can be made and measured in real-time.

Run-Time Variance Review

I recall using this strategy to achieve monumental gains – a three shift operation with considerable instability became an extremely capable and stable two shift production operation coupled with a one shift preventive maintenance / changeover team.  Month over month improvements were noted by significantly improved capability data (substantially reduced Standard Deviation) and marked increases in OEE.

Process run-time charts with statistical controls were implemented for quantities produced just as the Quality department maintains SPC charts on the floor for product data.  The shop floor personnel understood the relationship between quantity of good parts produced and how this would ultimately affect the department OEE as well.

Monitoring quantities of good parts produced over shorter fixed time intervals is more effective than a cumulative counter that tracks performance over the course of the shift.  In this specific case, the quantity was “reset” for each hour of production, essentially creating hourly targets in lieu of shift targets or goals.

Recording / plotting production quantities at fixed time intervals combined with notes to document specific process events creates a running production story board that can be used to identify patterns and other process anomalies that would otherwise be obscured.

Conclusion

I am hopeful that this post has heightened your awareness regarding the data that is represented by our chosen metrics.  In the boardroom, metrics are often viewed as absolute, sterile values.

Run-Time Variance also introduces a new perspective when attempting to compare OEE between shifts, departments, and factories.  From the context of this post, having OEE indices of the same value does not imply equality.  As we can see, metrics are not pure and perhaps even less so when managed in isolation.

Variance is indeed OEE’s Silent Partner but left unattended, Variance is also OEE’s Silent Killer.

Until Next Time – STAY lean!

Vergence Analytics

Twitter:  @Versalytics

Killer Metrics

Dead plant in pots
Image via Wikipedia

Managing performance on any scale requires some form of measurement.  These measurements are often summarized into a single result that is commonly referred to as a metric.  Many businesses use tools such as dashboards or scorecards to present a summary or combination of multiple metrics into a single report.

While these reports and charts can be impressive and are capable of presenting an overwhelming amount of data, we must keep in mind what we are measuring and why.  Too many businesses are focused on outcome metrics without realizing that the true opportunity for performance improvement can be found at the process level itself.

The ability to measure and manage performance at the process level against a target condition is the strategy that we use to strive for successful outcomes.  To put it simply, some metrics are too far removed from the process to be effective and as such cannot be translated into actionable terms to make a positive difference.

Overall Equipment Effectiveness or OEE is an excellent example of an outcome metric: it expresses, as a percentage, how effectively equipment is used over time.  To demonstrate the difference between outcome and process level metrics, let’s take a deeper look at OEE.  At the plant level, OEE represents an aggregate result of how effectively all of the equipment in the plant was used to produce quality parts at rate over the effective operating time.  Breaking OEE down into the individual components of Availability, Performance, and Quality may help to improve our understanding of where improvements can be made, but still does not serve to provide a specific direction or focus.

At the process level, Overall Equipment Effectiveness is a more practical metric and can serve to improve the operation of a specific work cell where a specific part number is being manufactured.  Clearly, it is more meaningful to equate Availability, Performance, and Quality to specific process level measurements.  We can monitor and improve very specific process conditions in real time that have a direct impact on the resulting Overall Equipment Effectiveness.  A process operating below the standard rate or producing non-conforming products can immediately be rectified to reverse a potentially negative result.
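To make this concrete, here is a sketch of the commonly used process-level calculation, OEE = Availability x Performance x Quality, using hypothetical shift data for a single work cell:

# Process-level OEE for a single work cell (hypothetical shift data).
planned_time_min = 450.0   # scheduled production time
downtime_min = 45.0        # recorded equipment downtime
ideal_cycle_min = 0.40     # standard (ideal) cycle time per part
total_count = 900          # parts produced
good_count = 880           # parts that passed inspection

run_time = planned_time_min - downtime_min
availability = run_time / planned_time_min
performance = (ideal_cycle_min * total_count) / run_time
quality = good_count / total_count

oee = availability * performance * quality
print(f"A = {availability:.1%}, P = {performance:.1%}, "
      f"Q = {quality:.1%}, OEE = {oee:.1%}")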

This is not to say that process level metrics supersede outcome metrics.  Rather, we need to understand the role that each of these metrics play in our quest to achieve excellence.  Outcome metrics complement process level metrics and serve to confirm that “We are making a difference.”  Indeed, it is welcome news to learn that process level improvements have translated into plant level improvements.  In fact, as is the case with OEE, the process level and outcome metrics can be synonymous with a well executed implementation strategy.

I recommend using Overall Equipment Effectiveness throughout the organization as both a process level and an outcome level metric.  The raw OEE data at the process level serves as a direct input to the higher level “outcome” metrics (shift, department, plant, company wide).  As such, the results can be directly correlated to specific products and / or processes if necessary to create specific actionable steps.

So, you may be asking, “What are Killer Metrics?”  Hint:  To Measure ALL is to Manage NONE.  Choose your metrics wisely.

Until Next Time – STAY lean!

Vergence Analytics

Lean, OEE, and How to beat the “Law of Diminishing Returns”

Are your lean initiatives falling prey to the Law of Diminishing Returns?  Waning returns may soon be followed by apathy as the “new” initiative gets old.  For those who have not studied economics or are not familiar with the term, it is defined by Wikipedia as follows:

The law states “that we will get less and less extra output when we add additional doses of an input while holding other inputs fixed. In other words, the marginal product of each unit of input will decline as the amount of that input increases holding all other inputs constant.”

In simple terms, continued application of time and effort to improve a process will eventually yield reduced or smaller returns.  The low hanging fruit that once was easy to see and resolve has all but disappeared.  Some companies would claim that they have finally “arrived”.  We contend that these same companies have simply hit their first plateau.

Methods and Objectives

Is it inevitable that a process will be refined to the point where additional investment can no longer be justified financially?  The short answer is “Yes and No”.  As the Olympics are well under way, we are quick to observe the fractions of seconds that may be shaved from current world records.  If you’re going for Olympic Gold, you will need every advancement or enhancement that technology has to offer to gain the competitive edge.  These advances in technology are refinements of existing processes that are governed by strict rules.  Clearly, there are much faster ways to get from point A to point B.  However, the objective of the Olympics is to demonstrate how these feats can be accomplished through the physical skills and abilities of the athletes.

In business our objectives are defined differently.  We want to provide (and our customers expect) the highest quality products at the lowest cost delivered in the shortest amount of time.  How we do that is up to us.  Lean initiatives and tools such as overall equipment effectiveness (OEE) can help us to refine current processes but are they enough to stimulate the development of new products and processes?  Or, are they limited to simply help us to recognize when optimum levels have been achieved?

Radical change versus refinement

Objectives are used to determine and align the methods that are used to achieve a successful outcome.  This is certainly the case in the automotive industry as environmental concerns and availability of non-renewable resources, specifically oil and gas, continue to gain global attention and focus.  The objectives of our “transportation” systems are being redefined almost dynamically as new technologies are beginning to emerge.  At some point, the automotive industry leaders must have realized that continuing to refine existing technologies simply will not satisfy future expectations.  With this realization it is now inevitable that a radical powertrain technology change is required.  Hybrid vehicles continue to evolve and electric cars are not too far behind.

How to Beat the Law of Diminishing Returns

Overcoming the law of diminishing returns requires another look at the vision, goals, and objectives of the company and to develop a new, different, or fresh perspective on what it is you are trying to achieve.  The lean initiatives introduced by Toyota, Walmart, Southwest and many others were driven by the need to find a competitive edge.  They recognized that they couldn’t simply be a “me too” company to gain the recognition and successes they now enjoy.

The question you may want to ask yourself and your team is, “If we started from scratch today, is this the result we would be looking for?”  The answer should be a unanimous and resounding “NO”.  Get out your whiteboard, pens, paper, and start writing down what you would be doing differently.  In other words, it’s time to re-energize the team and refocus your goals and objectives.  Vision and mission statements are not tombstones for the living.  5S these documents and take the time to re-invigorate your team.

Turning a company around may require some radical changes, and we need to be mindful of the new upstarts with the latest and greatest technology.  They may have an edge that we just haven’t taken the time to consider.  We are not suggesting that you need to replace all the equipment in your plant in order to compete.  Proven technologies have their place in industry, and competitive pricing isn’t always about speed.  The question you may need to consider is, “Can our technology be used to produce different products that have traditionally been manufactured using other methods?”

While many companies pursue a growth strategy based on their current product offerings and derivatives, we would strongly suggest that manufacturers consider a growth strategy based on their process technology offerings.  What else can we make with process or machine XYZ?  We anticipate that manufacturing sectors will soon start to blend as manufacturers pursue products beyond the scope of their current industry applications.

Be the Leader

Leading companies create and define the environment where their products and services will thrive.  Apple’s “iProducts” have redefined how electronics are used in everyday life.  As these tools are developed and evolve, so too can the systems and processes used throughout manufacturing.  The collective human mind is forever considering the possibilities of the next generation of products or services.

There was a time when manned space flight and walking on the moon were considered improbable.  Today we find ourselves discovering and considering galaxies beyond our own and we don’t give it a second thought.  How far can we go and how do we get there?  The answer to that question is …

Until Next Time – STAY lean!

Toyota Recall: Quality versus Quantity!

There has been much speculation about what went wrong and what is still right at Toyota.  It has even been suggested that Toyota may have become blinded by the desire to be the number 1 automaker in the world.  This suggests that quality and quantity are interrelated and that one will suffer at the expense of the other.  We would argue that this is simply not true in this case.  The question is, “What failed?”  Was it the product or the process?

Background Video:

We found an informative video about the Toyota acceleration issue that provides a little more insight than you may have read in the papers.   Click here to watch the video:

An Expert Opinion

We received another e-mail from Steven J. Spear, author of “Chasing the Rabbit“, that presents his perspective on what went “wrong” at Toyota.  Steven identifies the strategic areas that Toyota will need to address.  As we stated earlier, we don’t support the idea that Toyota grew too quickly.  The ability to effectively and efficiently produce a product in mass quantities may be more challenging for the manufacturers, but this should not have an impact on the design or function of the product itself.

If a manufacturing defect was found to be the reason for the recall, we would agree that growth and increased demand for product may be a factor.  We’ve all been in situations where overtime is required to meet demand, and the effect (stress and fatigue) this can have on employees over extended periods of time can be cause for concern.  All reports suggest that the recent flurry of recalls was driven by design or design-related issues – NOT how quickly or how many vehicles were actually made.

The e-mail from Steven Spear follows:

Dear Colleagues,
 
What went wrong with Toyota is the flip side of what went right over so many decades. In the late 1950s or 1960s, Toyota was a pretty cruddy car company. The variety was meager, quality was poor, and their production efficiency was abysmal.
 
Yet by the time they hit everyone’s radar in the 1980s, they had very high quality and unmatched productivity. The way they got there was by creating within Toyota exceptionally aggressive learning. They taught employees specialties, but more importantly, they taught people to pay very close attention to the “weak signals” the products and processes were sending back about design flaws, and then responding with high-speed, compressed learning cycles to take things that were poorly understood and convert them into things that were understood quite deeply.
 
That allowed Toyota to come from behind, race through the pack, and establish itself as the standard-setter on quality and efficiency and complex technology. But since then, things have affected Toyota in terms of their ability to sustain this kind of aggressive learning. 
 
These include:
 
• A rapid expansion in the number of people who had to be developed into aggressive learners with faster rates of business growth.

• A rapid increase in the need for aggressive learning as the technological complexity of products and plants increased as well.
 
For more on this problem of overburdening the innovative capacity of an organization, please see my interview, “3 Questions: Steven Spear on Toyota’s Troubles,” conducted by the MIT News Office.
 
Best wishes,
Steve Spear
 

• “3 Questions: Steven Spear on Toyota’s Troubles,” interview with the MIT News Office.
• “Toyota: Too Big, Too Fast,” by Gordon Pitts, The Globe and Mail (February 5, 2010).
• “Learning from Toyota’s Stumble,” e-article at HarvardBusiness.Org.
• http://ChasingTheRabbitBook.com for the preface, foreword, intro, and blog.

Click here to get your copy of Chasing the Rabbit!

Our Opinion

We would support the idea that Toyota must revisit their advanced engineering and design processes to make sure that products released for production and the public in general are safe and robust.  If there is any weakness in the problem solving community, it may just be the disconnect between events that occur in the real world and the events that the Toyota engineering and design communities choose to acknowledge.  We would also suggest that any event that results in the loss of human life must be immediately and thoroughly investigated and evaluated.  To this end, perhaps Toyota’s size compounds the challenge of maintaining an effective communication strategy – another area that should be revisited.

Tragedies occur on our highways every day and some of us may be tempted to categorize many of these accidents as “operator error”.  We have learned by working with Toyota and other automotive companies that “operator error” is not an acceptable root cause.  What is it that the operator did or didn’t do and, of course, why?  Consistent with Toyota’s “Plan versus Actual” thinking, the Advanced Design, Development, and Engineering strategy will be subject to a significant transformation as Toyota reflects on this experience.

Toyota has done a remarkable job of instituting manufacturing processes that we now know as lean.  In this respect, it is important not to confuse Toyota’s manufacturing strategy with the design of the product itself.  Although design and process recalls may be related, they can also be separable and unique.  Maple Leaf Foods announced a significant recall when deaths were linked to meat products found to be contaminated with Listeria (the cause of listeriosis).  The recall in this case was directly attributed to the cleanliness of the equipment (process).  Stork Craft Drop-Side Cribs were also subject to recall in November of 2009 for products with manufacture and distribution dates spanning almost 16 years.  In this case, the product design, material selection, and installation methods were at fault.  History is rife with examples, including many from the original North American (Detroit) automakers.

Most companies take responsibility for the quality of the products and services they provide.  We don’t accept the idea that, as consumers, we are vulnerable to a company’s ability to meet demand.  We expect Quality and Quantity – not one or the other.

Now that the Olympics have started, there will be other news that will keep us pre-occupied for the next few weeks.  During this time, the automakers can fix their respective problems and start selling cars again.

Until Next Time – STAY lean!