An article in today’s Toronto Star titled “Surgeons given a hands-off way to Kinect” clearly demonstrates how improvements can be realized in our work environment. One of the concerns in the operating room is maintaining a sterile field during surgery. Doctors cannot physically touch any devices away from the sterile field for fear of breaking it and have only one of two choices if they need to review MRIs or CT scans:
Scrub in and out every time, which according to the article can add up to two (2) hours per surgery, or
Hire an assistant to page through the records for them.
In the search for a better way, Matt Strickland, a first year surgical resident at the University of Toronto and electrical engineer, and Jamie Tremaine, a mechatronics engineer, who both studied engineering at the University of Waterloo, joined forces to help solve this problem. Together, they devised a system using the Xbox Kinect with the help of Greg Brigley, a computer engineer and also a University of Waterloo graduate.
Using their technology, doctors can now scroll through as many as 4,000 documents using simple hand motions, literally integrating access to information into the surgical process without jeopardizing the sterile field.
Why is this significant?
Matt Strickland was the assistant providing the necessary “documents” to the doctors performing the surgery. This is a very impressive application of thinking outside the box. I highly encourage you to read the article. Serendipity is seldom the source of repeatable innovations; in this instance, however, we’ll take it just the same.
This example demonstrates another reason to include everyone in the problem-solving process and also reaffirms that there is always a better way. You just don’t know where your next solution will find its roots.
On a final note, I have to wonder if the creators of the Xbox even considered this application!
In my article “Waste: The Devil is in the Details”, I discussed the importance of paying attention to the details. From a company or personal perspective, the underlying theme in identifying waste (or opportunity) is to be continually cognizant of what it is we’re doing and to keep asking “Why?”
I have continually stressed the importance of conducting process reviews right where the action is. It seems we’re not alone in this thinking and I thought it was quite fitting to share an e-mail I received from John Shook:
Decompressing now from last week’s Lean Transformation Summit in Dallas, there is much to reflect upon. We heard from four companies and experienced six learning sessions to explore the frontiers and fundamentals of lean transformation. And it is always exciting to get together with 440 like-minded, lean-thinking individuals.
Apologies again to the many of you who weren’t able to attend, since the event sold out so early. You should know, however, we do not plan to expand the size of the event in the future. We want to continue to limit it to a relatively intimate size to enable and encourage interaction, dialogue, debate, networking, and casual socializing.
I do have good news for those of you who missed the event. One highlight was the debut of Jim Womack’s new book, Gemba Walks, which is now available to you.
Many have asked what Jim has been up to since stepping down as CEO of LEI. The answer is that Jim has remained as busy as ever and, what’s more, now his letters are back, in different form. In Gemba Walks, Jim compiles many of his eLetters, written between 2001 and 2011. Gemba Walks is more than a mere compilation, however, with some new content and new commentary for each letter, edited and grouped by topic. As a reader, I can tell you that the experience of reading the letters in this new context is surprising, refreshing, enlightening, and, well, fun. It’s always an enjoyable romp to join Jim on a walk through a gemba and Gemba Walks provides the next best thing to being there.
These three principles of lean leadership are well-known: go see, ask why, and show respect. You know that to “go see” is fundamental to all lean thinking and acting. But what does that actually mean? How do we go see?
Gemba Walks reveals how Jim’s thinking has evolved as a result of observing what happens as lean has taken root in companies around the world. New successes inevitably lead to new, and better, problems for lean practitioners. This book documents how companies are continuing to press forward.
In my foreword, I recall the first time I had a chance to visit a gemba with Jim, when I was still a Toyota employee:
“The first time I walked a gemba with Jim was on the plant floor of a Toyota supplier. Jim was already famous as the lead author of The Machine That Changed the World; I was the senior American manager at the Toyota Supplier Support Center. My Toyota colleagues and I were a bit nervous about showing our early efforts of implementing TPS at North American companies to “Dr. James P. Womack.” We had no idea of what to expect from this famous academic researcher.
“My boss was one of Toyota’s top TPS experts, Mr. Hajime Ohba. We rented a small airplane for the week so we could make the most of our time, walking the gemba of as many worksites as possible. As we entered the first supplier, walking through the shipping area, Mr. Ohba and I were taken aback as Jim immediately observed a work action that spurred a probing question. The supplier was producing components for several Toyota factories. They were preparing to ship the exact same component to two different destinations. Jim immediately noticed something curious. Furrowing his brow while confirming that the component in question was indeed exactly the same in each container, Jim asked why parts headed to Ontario were packed in small returnable containers, yet the same components to be shipped to California were in a large corrugated box. This was not the type of observation we expected of an academic visitor in 1993.
“Container size and configuration was the kind of simple (and seemingly trivial) matter that usually eluded scrutiny, but that could in reality cause unintended and highly unwanted consequences. It was exactly the kind of detail that we were encouraging our suppliers to focus on. In fact, at this supplier in particular, the different container configurations had recently been highlighted as a problem. And, in this case, the fault of the problem was not with the supplier but with the customer – Toyota! Different requirements from different worksites caused the supplier to pack off the production line in varying quantities (causing unnecessary variations in production runs), to prepare and hold varying packaging materials (costing money and floor space), and ultimately resulted in fluctuations in shipping and, therefore, production requirements. The trivial matter wasn’t as trivial as it seemed.
“We had not been on the floor two minutes when Jim raised this question. Most visitors would have been focused on the product, the technology, the scale of the operation, etc. Ohba-san looked at me and smiled, as if to say, ‘This might be fun.'” (Click here for a free pdf of the complete foreword.)
Fun it has been. Challenging it has been, too, but always full of learning. Fun and challenging learning it will no doubt continue to be.
I am often asked what book to recommend to start someone down the lean path. From now on, Gemba Walks will be that book. With an overview of tools and theory told through stories and explorations of real events, Gemba Walks invites readers to tackle problems on an immediate and personal level. In so doing, it gives courage for beginners to get started. And for veterans to keep going.
John Shook
Chairman and CEO
Lean Enterprise Institute, Inc.
Again it is worth noting the attention to detail. I recall a number of occasions where I have challenged customers to address operational differences between facilities (not much different from the situation above). I can say that Toyota was one of the few companies that listened and actually did something about it.
I recognize that benchmarking is not a new concept. In business, we have learned to appreciate the value of benchmarking at the “macro level” through our deliberate attempts to establish a relative measure of performance, improvement, and even for competitor analysis. Advertisers often use benchmarking as an integral component of their marketing strategy.
The discussion that follows will focus on the significance of benchmarking at the “micro level” – the application of benchmarking in our everyday decision processes. In this context, “micro benchmarking” is a skill that we all possess and often take for granted – it is second nature to us. I would even go so far as to suggest that some decisions are autonomous.
With this in mind, I intend to take a slightly different, although general, approach to introduce the concept of “micro benchmarking”. I also contend that “micro benchmarking” can be used to introduce a new level of accountability to your organization.
Human Resources – The Art of Deception: Interviews and Border Crossing
Micro benchmarking can literally occur “in the moment.” The interview process is one example where “micro benchmarking” frequently occurs. I recently read an article titled, “Reading people: Signs border guards look for to spot deception“, and made particular note of the following advice to border crossing agents (emphasis added):
Find out about the person and establish their base-line behavior by asking about their commute in, their travel interests, etc. Note their body language during this stage as it is their norm against which all ensuing body language will be compared.
The interview process, whether for a job or crossing the border, represents one example where major (even life changing) decisions are made on the basis of very limited information. As suggested in the article, one of the criteria is “relative change in behavior” from the norm established at the first greeting. Although the person conducting a job interview may have more than just “body language” to work with, one of the objectives of the interview is to discern the truth – facts from fiction.
Obviously, the decision to permit entry into the country, or to hire someone, may have dire consequences, not only for the applicant, but also for you, your company, and even the country. Our ability to benchmark at the micro level may be one of the more significant discriminating factors whereby our decisions are formulated.
Decisions – For Better or Worse
Every decision we make in our lives is accompanied by some form of benchmarking. While this statement may seem to be an over-generalization, let’s consider how decisions are actually made. It is a common practice to “weigh our options” before making the final decision. I suggest that every decision we make is rooted against some form of benchmarking exercise. The decision process itself considers available inputs and potential outcomes (consequences):
Better – Worse
Pros – Cons
Advantages – Disadvantages
Life – Death
Success – Failure
Safe – Risk
Decisions are usually intended to yield the best of all possible outcomes and, as suggested by the very short list above, they are based on “relative advantage” or “consequential” thinking processes. At the heart of each of these decisions is a baseline reference or “benchmark” against which a good or presumably “correct” decision can be made.
We have been conditioned to believe (religion / teachings) and think (parents / education / social media / music) certain thoughts. These “belief systems” or perceived “truths” serve as filters, in essence forming the baseline or “benchmark” by which our thoughts, and hence our decisions, are processed. Every word we read or hear is filtered against these “micro level” benchmarks.
I recognize that many other influences and factors exist but, suffice it to say, they are still based on a relative benchmark. Unpopular decisions are just one example where social influences are heavily considered and weighed. How many times have we heard, “The best decisions are not always popular ones”? Politicians are known to make tough and not so popular decisions early in their term and rely on a waning public memory as the next election approaches – time heals all wounds, but the scars remain.
Decisions – Measuring Outcomes
As alluded to in the last paragraph, our decision process may be biased as we consider the potential “reactions” or responses that may result. Politics is rife with “poll” data that somehow sways the decisions that are made. In a similar manner, substantially fewer issues of value are resolved in an election year for fear of a negative voter response.
In essence, there are two primary outcomes to every decision: Reactions and Results. The results of a decision are self-explanatory but may be classified as summarized below.
If you are still with me, I suggest that at least two levels of accountability exist:
The process used to arrive at the decision
The results of the decision
In corporations, large and small, executives are often held to account for worse than expected (negative) performance, where results are the primary – and seemingly only – focus of discussion. I contend that positive results that exceed expectations should be subject to the same, if not higher, level of scrutiny.
Better- and worse-than-expected results are both indicative of an incomplete understanding of the process or system and, as such, present an opportunity for greater learning. Predicting outcomes or results is a fundamental requirement and a best practice where accountability is an inherent characteristic of company culture.
Toyota is known for continually deferring to the most basic measurement model: Planned versus Actual. Although positive (better than expected) results are more readily accepted than negative (worse than expected) results, both impact the business:
Better than expected:
Other potential investments may have been deferred based on the planned return on investment.
Financial statements are understated, which affects other business aspects and transactions.
Decision model / process does not fully describe / consider all aspects to formulate planned / predictable results
Decision process to yield actual results cannot be duplicated unless lessons learned are pursued, understood, and the model is updated.
Worse than expected:
Poor / lower than expected return on investment
Extended financial obligations
Negative impact to cash flow / available cash
Lower stakeholder confidence for future investments
Decision model / process does not fully describe / consider all aspects to formulate planned / predictable results
Decision process will be duplicated unless lessons learned are pursued, understood, and the model is updated.
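As a minimal sketch of the idea, not anything prescribed in the text: a planned-versus-actual review that subjects favorable and unfavorable variances to the same scrutiny might look like this (the function name and the 5% tolerance are my own illustrative choices):

```python
def review_variance(planned, actual, tolerance=0.05):
    """Flag any result, better or worse, that deviates from plan
    by more than the tolerance (5% here, an arbitrary example)."""
    variance = (actual - planned) / planned
    if abs(variance) <= tolerance:
        return "within plan"
    direction = "better" if variance > 0 else "worse"
    return f"{direction} than expected ({variance:+.1%}): review the decision model"

# Both outcomes trigger the same scrutiny:
print(review_variance(planned=100, actual=130))  # better than expected
print(review_variance(planned=100, actual=80))   # worse than expected
```

The point of the sketch is the symmetry: exceeding the plan routes to the same review as missing it, because both expose gaps in the decision model.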
The second level of accountability, and perhaps the most important, concerns the process or decision model used to arrive at the decision. In either case, we want to distinguish between informed decisions, “educated guesses”, “wishful thinking”, and willful neglect. We can see that both individual and system / process level accountabilities exist.
The ultimate objective is to understand “what we were thinking” so we can repeat our successes without repeating our mistakes. This seems to be a reasonable expectation and is a best practice for learning organizations.
Some companies are very quick to assign “blame” to individuals regardless of the reason for failure. These situations can become very volatile and once again are best exemplified in the realm of politics. There tends to be more leniency for individuals where policies or protocols have been followed. If the system is broken, it is difficult to hold individuals to account.
The Accountability Solution – Show Your Work!
So, who is accountable? Before you answer that, consider a person who used a decision model and the results were worse than the model predicted. From a system point of view the person followed standard company protocol. Now consider a person who did not use the model, knowing it was flawed, and the results were better than expected. Both “failures” have their root in the same fundamental decision model.
The accountabilities introduced here, however, are somewhat different. The person following protocol has a traceable failure path. In the latter case, the person introduced a new “untraceable” method – unless, of course, the person noted and advised of the flawed model before, and not after, the fact.
Toyota is one of the few companies I have worked with where documentation and attention to detail are paramount. As another example, standardized work is not intended to serve as a rigid set of instructions that can never be changed. To the contrary, changes are permissible, however, the current state is the benchmark by which future performance is measured and proven. The documentation serves as a tangible record to account for any changes made, for better or worse.
Throughout high school and college, we were always encouraged to “show our work”. Some courses offered partial marks for the method even if the final answer was wrong. The opportunities for learning here, however, are greater than simply determining the student’s comprehension of the subject material. It also offers the teacher an opportunity to understand why a student failed to comprehend the subject matter and to determine whether the method used to teach the material could be improved.
Showing the work also demonstrates where the process breakdown occurred. A wrong answer could have been due to a complete misunderstanding of the material or the result of a simple mis-entry on a calculator. Why and how we make our decisions is just as important as understanding our expectations.
While the latter situations may be more typical of a macro level benchmark, I suggest that similar checks and balances occur even at the micro level. As mentioned in the premise, some decisions may even be autonomous (snap decisions). Examples of these are public statements that all too often require an apology after the fact. The sentiments for doing so usually include, “I’m sorry, I don’t know what I was thinking.” I am always amazed to learn that we may sometimes fail to keep ourselves informed of what we’re thinking.
Admittedly, it has been a while since I checked a shampoo bottle for directions, however, I do recall a time in my life reading: Lather, Rinse, Repeat. Curiously, they don’t say when or how many times the process needs to be repeated.
Perhaps someone can educate me as to why it is necessary to repeat the process at all – other than “daily”. I also note that this is the only domestic “washing” process that requires repeating the exact same steps. Hands, bodies, dishes, cars, laundry, floors, and even pets are typically washed only once per occasion.
The intent of this post is not to debate the effectiveness of shampoo or to determine whether this is just a marketing scheme to sell more product. The point of the example is this: simply following the process as defined is, in my opinion, inherently wasteful of product, water, and time – literally, money down the drain.
Some shampoo companies may have changed the final step in the process to “repeat as necessary”, but that still presents a degree of uncertainty and all but ensures that exceptions to the new standard process of “Lather, Rinse, and Repeat as Necessary” will occur.
In the spirit of continuous improvement, new 2-in-1 and even 3-in-1 products are available on the market today that serve as the complete “shower solution” in one bottle. As these are also my products of choice, I can advise that these products do not include directions for use.
Scratching the Surface
As lean practitioners, we need to position ourselves to think outside the box and challenge the status quo. This includes the manner in which processes and tasks are executed. In other words, we not only need to assess what is happening, we also need to understand why and how.
One of the reasons I am concerned with process audits is that conformance to the prescribed systems, procedures, or “Standard Work” somehow suggests that operations are efficient and effective. In my opinion, nothing could be further from the truth.
To compound matters, in cases where non-conformances are identified, the team is often too eager to fix (“patch”) the immediate process without considering the implications for the system as a whole. I present an example of this in the next section.
The only hint of encouragement that satisfactory audits offer is this: “People will perform the tasks as directed by the standard work – whether it is correct or not.” Of course this assumes that procedures were based on people performing the work as designed or intended as opposed to documenting existing habits and behaviors to assure conformance.
Examining current systems and procedures at the process level only serves to scratch the surface. First hand process reviews are an absolute necessity to identify opportunities for improvement and must consider the system or process as a whole as you will see in the following example.
Manufacturing – Another Example
On one occasion, I was facilitating a preparatory “process walk” with the management team of a parts manufacturer. As we visited each step of the process, we observed the team members while they worked and listened intently as they described what they do.
As we were nearing the end of the walk through, I noted that one of the last process steps was “Certification”, where parts are subject to 100% inspection and rework / repair as required. After being certified, the parts were placed into a container marked “100% Certified” then sent to the warehouse – ready for shipping to the customer.
When I asked about the certification process, I was advised that: “We’ve always had problems with these parts and, whenever the customer complained, we had to certify them all 100% … ‘technical debate and more process intensive discussions followed here’ … so we moved the inspection into the line to make sure everything was good before it went in the box.”
Sadly, when I asked how long they’d been running like this, the answer was no different from the ones I’ve heard so many times before: “Years”. So, because of past customer problems and the failure to identify true root causes and implement permanent corrective actions, this manufacturer decided to absorb the “waste” into the “normal” production process and make it an integral part of the “standard operating procedure.”
To be clear, just when you thought I picked an easy one, the real problem is not the certification process. To the contrary, the real problem is in the “… ‘technical debate and more process intensive discussions followed here’ …” portion of the response. Simply asking about the certification requirement was scratching the surface. We need to …
Get Below the Surface
I have always said that the quality of a product is only as good as the process that makes it. So, as expected, the process is usually where we find the real opportunities to improve. From the manufacturing example above, we clearly had a bigger problem to contend with than simply “sorting and certifying” parts. On a broader scale, the problems I personally faced were two-fold:
The actual manufacturing processes with their inherent quality issues and,
The team’s seemingly firm stance that the processes couldn’t be improved.
After some discussion and more debate, we agreed to develop a process improvement strategy. Working with the team, we created a detailed process flow and Value Stream Map of the current process. We then developed a Value Stream Map of the Ideal State process. Although we did identify other opportunities to improve, it is important to note that the ideal state did not include “certification”.
I worked with the team to facilitate a series of problem solving workshops where we identified and confirmed root causes, conducted experiments, performed statistical analyses, developed / verified solutions, implemented permanent corrective actions, completed detailed process reviews and conducted time studies. Over the course of 6 months, progressive / incremental process improvements were made and ultimately the “certification” step was eliminated from the process.
We continued to review and improve other aspects of the process, supporting systems, and infrastructure, including, but not limited to: materials planning and logistics, purchasing, scheduling, inventory controls, part storage, preventive maintenance, and redefined and refined process controls, all supported by documented work instructions as required. We also evaluated key performance indicators. Some were eliminated while new ones, such as Overall Equipment Effectiveness, were introduced.
Some of the tooling changes to achieve the planned / desired results were extensive. One new tool was required while major and minor changes were required on others. The real tangible cost savings were very significant and offset the investment / expense many times over. In this case, we were fortunate that new jobs being launched at the plant could absorb the displaced labor resulting from the improvements made.
Every aspect of the process demonstrated improved performance and ultimately increased throughput. The final proof of success was also reflected on the bottom line. In time, other key performance indicators reflected major improvements as well, including quality (low single digit defective parts per million, significantly reduced scrap and rework), increased Overall Equipment Effectiveness (Availability, Performance, and Quality), increased inventory turns, improved delivery performance (100% on time – in full), reduced overtime, and more importantly – improved morale.
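For reference, Overall Equipment Effectiveness is simply the product of its three factors. A minimal sketch of the calculation (the figures below are illustrative only, not the plant data from this example):

```python
def oee(availability, performance, quality):
    """OEE is the product of Availability, Performance, and Quality,
    each expressed as a fraction of the ideal (1.0 = 100%)."""
    return availability * performance * quality

# Illustrative figures only:
print(f"OEE = {oee(0.90, 0.95, 0.99):.1%}")  # 84.6%
```

Because the factors multiply, a modest loss in any one of them drags the overall score down disproportionately, which is why tracking all three separately matters.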
I have managed many successful turnarounds in manufacturing over the course of my career and, although the problems we face are often unique, the challenge remains the same: to continually improve throughput by eliminating non-value added waste. Of course, none of this is possible without the support of senior management and full cooperation of the team.
While it is great to see plants that are clean and organized, be forewarned that looks can be deceiving. What we perceive may be far from efficient or effective. In the end, the proof of wisdom is in the result.
I recently published “Urgent -> The Cost of Things Gone Wrong”, where I expressed concern for dashboards that attempt to do too much. In this regard, they become more of a distraction than a tool serving the intended purpose of helping you manage your business or processes. To be fair, there are at least two (2) levels of data management that are perhaps best differentiated by where and how they are used: Scorecards and Dashboards.
I prefer to think of Dashboards as working with Dynamic Data: data that changes in real time and influences our behaviors, much like the dashboards in our cars communicate with us as we are driving. The fuel gauge, odometer, two trip meters, tachometer, speedometer, digital fuel consumption (L/100 km), and km remaining are just a few examples of the instrumentation available to me in my Mazda 3.
While I appreciate the extra instrumentation, the two that matter first and foremost are the speedometer and the tachometer (since I have a 5 speed manual transmission). The other bells and whistles do serve a purpose but they don’t necessarily cause me to change my driving behavior. Of note here is that all of the gauges are dynamic – reporting data in real time – while I’m driving.
A Scorecard, on the other hand, is a periodic view of summary data and, from our example, may include Average Fuel Consumption, Average Speed, Maximum Speed, Average Trip, Maximum Trip, Total Miles Traveled, and so on. The scorecard may also include driving record and vehicle performance data such as Parking Tickets, Speeding Tickets, Oil Changes, Flat Tires, and Emergency and Preventive Maintenance.
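The distinction can be sketched in code (the trip readings are invented for illustration): a dashboard reports the latest value as it arrives, while a scorecard summarizes the period after the fact:

```python
speeds_kmh = [60, 80, 100, 90, 70]  # readings logged during one trip (invented)

# Dashboard: dynamic data -- the current reading, updated in real time
current_speed = speeds_kmh[-1]

# Scorecard: periodic summary -- computed after the trip is complete
scorecard = {
    "average_speed": sum(speeds_kmh) / len(speeds_kmh),
    "maximum_speed": max(speeds_kmh),
}

print(f"Dashboard now: {current_speed} km/h")
print(f"Scorecard: avg {scorecard['average_speed']:.0f} km/h, "
      f"max {scorecard['maximum_speed']} km/h")
```

The dashboard value is the one that can change your behavior in the moment; the scorecard values are the ones you review later to judge the trip as a whole.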
Take some time to review your current metrics. What metrics are truly influencing your behaviors and actions? How are you using your metrics to manage your business? Are you reacting to trends or setting them?
It’s been said that, “What gets measured gets managed.” I would add – “to a point.” It simply isn’t practical or even feasible to measure everything. I say, “Measure to manage what matters most”.
It is inevitable that failures will occur, and it is only a matter of time before we are confronted with their effects. Our concern is our ability to anticipate and respond to failures when they occur. How soon is too soon to respond to a change or shift in the process? Do we shut down the process the instant a defect is discovered? How do we know what conditions warrant an immediate response?
The quality of a product is directly dependent on the manufacturing process used to produce it and, as we know all too well, tooling, equipment, and machines are subject to wear, tear, and infinitely variable operating parameters. As a result, it is imperative to understand those process parameters and conditions that must be monitored and to develop effective responses or corrective actions to mitigate any negative direct or indirect effects.
Statistical process control techniques have been used by many companies for years to monitor and manage product quality. Average-range and individuals-moving range charts, to name two, have been used to identify trends that are indicative of process changes. When certain control limits or conditions are exceeded, production is stopped and appropriate corrective actions are taken to resolve the concern. Typically, the corrective actions are recorded directly on the control chart.
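As a minimal sketch of the individuals-moving range approach (the measurements below are invented; 2.66 is the standard individuals-chart constant, 3/d2 with d2 = 1.128 for a moving range of span two):

```python
def imr_limits(values):
    """Compute individuals-chart control limits from a series of
    measurements using the average moving range (span of two)."""
    mean = sum(values) / len(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    ucl = mean + 2.66 * mr_bar  # upper control limit
    lcl = mean - 2.66 * mr_bar  # lower control limit
    return lcl, ucl

def out_of_control(values):
    """Return the points beyond the control limits -- each one a
    trigger for stopping and investigating."""
    lcl, ucl = imr_limits(values)
    return [v for v in values if v < lcl or v > ucl]

# Invented measurements with one obvious excursion:
data = [10.1, 10.0, 10.2, 9.9, 10.1, 10.0, 10.2, 9.9, 10.1, 15.0]
print(out_of_control(data))  # flags the excursion: [15.0]
```

In practice the limits would be established from a stable baseline period rather than from data that includes the excursion, but the sketch shows the mechanism: the chart turns a deviation into an explicit trigger for corrective action.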
Process parameters and product characteristics may be closely correlated; however, few companies make the transition to relying on process parameters alone. One reason for this is the lack of available data, particularly at launch, to establish effective operating ranges for process parameters. While techniques such as Design of Experiments can be used, the limited data set rarely provides an adequate sample size for conclusive or definitive parameter ranges to be determined for long-term use.
Learning In Real-Time
It is always in our best interest to use the limited data that is available to establish a measurement baseline. The absence of extensive history does not exempt us from making “calculated” adjustments to our process parameters. The objective of measuring and monitoring our processes and product characteristics is to learn how our processes are behaving in real-time. In too many cases, however, operating ranges have not evolved with the product development cycle.
Although we may not have established the full operating range, any changes outside of historically observed settings should be cause for review and possibly cause for concern. Again, the objective is to learn from any changes or deviations that are not within the scope of the current operating condition.
A trigger event occurs whenever a condition exceeds established process parameters or operating conditions. This includes failure to follow prescribed or standardized work instructions. Failing to understand why the “new” condition developed, is needed, or must be accepted jeopardizes process integrity, and the opportunity for learning may be lost.
Our ability to detect or sense “abnormal” process conditions is critical to maintain effective process controls. A disciplined approach is required to ensure that any deviations from normal operating conditions are thoroughly reviewed and understood with applicable levels of accountability.
An immediate response is required whenever a trigger event occurs to facilitate the greatest opportunity for learning. “Cold case” investigations based on speculation tend to align facts with a given theory rather than deriving a theory solely from the facts themselves.
Recurring variances or previously observed deviations within the normal process may be cause for further investigation and review. As mentioned in previous posts, “Variance – OEE’s Silent Partner” and “OEE in an Imperfect World“, one of our objectives is to reduce or eliminate variance in our processes.
Interactions and Coupling
When we consider the definition of normal operating conditions, we must be cognizant of possible interactions. Two conditions observed during separate events may create chaos if those events occur at the same time. I have observed multiple equipment failures where we subsequently learned that two machines on the same electrical grid cycled at the exact same time. One machine continued to cycle without incident while a catastrophic failure occurred on the other.
Although the chance of cycling the machines at the exact same moment was slim and deemed not to be a concern, reality proved otherwise. Note that monitoring each machine separately showed no signs of abnormal operation or excessive power spikes. One of the machines (a welder) was moved to a different location in the plant operating on a separate power grid. No failures were observed following the separation.
Another situation occurred where multiple machines were attached to a common hydraulic system. Under normal circumstances, up to 70% of the machines were operating at any given time. On some occasions it was noted that an increase in quality defects occurred with a corresponding decrease in throughput, although no changes were made to the machines. In retrospect, the team learned that almost all of the machines (90%) were running. Later investigation showed that the hydraulic system could not maintain a consistent system pressure when all machines were in operation. To overcome this condition, boosters were added to each of the hydraulic drops to stabilize the local pressure at the machine.
To summarize our findings here, we need to understand the system as a whole as well as the isolated, machine-specific parameters. Any potential interactions or effects of process coupling must be considered in the overall analysis.
I recommend using a simple reporting system to gather the facts and relevant data. The objective is to gain sufficient data to allow for an effective review and assessment of the trigger condition and to better understand why it occurred.
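One possible shape for such a simple reporting record is sketched below; the field names and example values are hypothetical, intended only to show the kind of facts worth capturing at the moment the event occurs.

```python
# Sketch: a minimal trigger-event report captured at the time of the event,
# so a later review does not depend on memory. Field names are illustrative.

from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class TriggerEventReport:
    parameter: str             # which parameter or condition triggered
    observed_value: float      # value at the time of the event
    expected_range: tuple      # (low, high) operating range in effect
    observed_at: datetime = field(default_factory=datetime.now)
    operator_notes: str = ""   # what was happening when it occurred
    disposition: str = "open"  # open / reviewed / range-updated / corrected

report = TriggerEventReport(
    parameter="pressure_psi",
    observed_value=108.3,
    expected_range=(95.0, 105.0),
    operator_notes="Two machines on the same grid cycled simultaneously.",
)
```

Note that the record includes the operating range in effect at the time: if the range later evolves, the report still shows why the reading was considered a trigger when it occurred.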
It is important to note that a trigger event does not automatically imply that product is non-conforming. It is very possible, especially during new product launches, that the full range of operating parameters has not yet been realized. As such, we simply want to ensure that we are not changing parameters arbitrarily without exercising due diligence to ensure that all effects of the change are understood.
After a 10 month investigation into the cause of “Sudden Unintended Acceleration”, the results of the Federal Investigation were finally released on February 8, 2011, stating that no electronic source was found to cause the problem. According to a statement released by Toyota, “Toyota welcomes the findings of NASA and NHTSA regarding our Electronic Throttle Control System with intelligence (ETCS-i) and we appreciate the thoroughness of their review.”
The findings do, however, implicate some form of mechanical failure and do not necessarily rule out driver error. It is foreseeable that a mechanical failure could be cause for concern, and it was seriously considered as part of Toyota’s initial investigation and findings, which also included a concern with floor mats. While the problem is very real, the root cause may still remain a mystery. Although the timeline for this problem has extended over more than a year, it demonstrates the importance of gathering as much vital evidence as possible while events are unfolding.
A Follow Up to Sustainability
When a product has reached maximum market penetration it becomes vulnerable. According to USA Today, “Activision announced it was cancelling a 2011 release of its massive music series Guitar Hero and breaking up the franchise’s business unit citing profitability as a concern.”
I find it hard to imagine all of the Guitar Hero games now becoming obsolete and eventual trash. The life span of the product has exceeded the company’s ability to support it. This is a sad state of affairs.
Today, February 6, 2011, is Super Bowl 45 (XLV) where the Pittsburgh Steelers meet the Green Bay Packers. Historically, the commercials are just as entertaining as the game itself. I thought this Application made for an interesting “message” delivery service.
We are also proud to announce the launch of our digital paper “Versalytics Today“, featuring articles on lean and related topics from our Selected Followers on Twitter.com. Versalytics Today is updated every twenty-four hours and demonstrates the power of collaboration in the Lean community.
It’s no secret that lean is much more than a set of tools and best practices designed to eliminate waste and reduce variance in our operations. I contend that lean is defined by a culture that embraces the principles on which lean is founded. An engaged lean culture is evidenced by the continuing development and integration of improved systems, methods, technologies, best practices, and better practices. When the principles of lean are clearly understood, the strategy and creative solutions that are deployed become a signature trait of the company itself.
Unfortunately, to offset the effects of the recession, many lean initiatives have either diminished or disappeared as companies downsized and restructured to reduce costs. People who once entered data, prepared reports, or updated charts could no longer be supported and their positions were eliminated. Eventually, other initiatives also lost momentum as further staffing cuts were made. In my opinion, companies that adopted this approach simply attempted to implement lean by surrounding existing systems with lean tools.
Some companies have simply returned to a “back to basics” strategy that embraces the most fundamental principles of lean. Is it enough to be driven by a mission, a few metrics, and simple policy statements or slogans such as “Zero Downtime”, “Zero Defects”, and “Eliminate Waste”? How do we measure our ability to safely produce a quality part at rate, delivered on time and in full, at the lowest possible cost? Regardless of what we measure internally, our stakeholders are only concerned with two simple metrics – Profit and Return on Investment. The cold hard fact is that banks and investors really don’t care what tools you use to get the job done. From their perspective the best thing you can do is make them money! I agree that we are in business to make money.
What does it mean to be lean? I ask this question on the premise that, in many cases, sustainability appears to be dependent on the resources that are available to support lean versus those who are actually running the process itself. As such, “sustainability” is becoming a much greater concern today than perhaps most of us are likely willing to admit. I have always encouraged companies to implement systems where events, data, and key metrics are managed in real-time at the source such that the data, events, and metrics form an integral part of the whole process.
Processing data for weekly or monthly reports may be necessary; however, such reports are only meaningful if they are an extension of ongoing efforts at the shop floor / process level itself. To do otherwise is simply pretending to be lean. It is imperative that the data being recorded, the metrics being measured, and the corrective actions taken are meaningful, effective, and influence our actions and behaviors.
To illustrate the difference between Culture and Tools, consider this final thought: A carpenter is still a carpenter with or without a hammer and nails.
I have always been impressed by Toyota’s inherent ability to adapt, improve, and embrace change even during the harshest times. This innate ability is a signature trait of Toyota’s culture and has been the topic of intense study and research for many years.
How is it that Toyota continues to thrive regardless of the circumstances they encounter? While numerous authors and lean practitioners have studied Toyota’s systems and shared best practices, all too many have missed the underlying strategy behind Toyota’s ever evolving systems and processes. As a result, we are usually provided with ready to use solutions, countermeasures, prescriptive procedures, and forms that are quickly adopted and added to our set of lean tools.
The true discovery occurs when we realize that these forms and procedures are the product or outcome of an underlying systemic thought process. This is where the true learning and process transformations take place. In many respects this is similar to an artist who produces a painting. While we can enjoy the product of the artist’s talent, we can only wonder how the original painting appears in the artist’s mind.
Surprisingly, the specific techniques described in the book are not new; however, the manner in which they are used does not necessarily follow conventional wisdom or industry practice. Throughout the book, it becomes clear that the current practices at Toyota are the product of a collection of improvements, each building on the results of previous steps taken toward a seemingly elusive target.
Although we have gleaned and adopted many of Toyota’s best practices into our own operations, we do not have the benefit of the lessons learned nor do we fully understand the circumstances that led to the creation of these practices as we know them today. As such, we are only exposed to one step of possibly many more to follow that may yield yet another radical and significantly different solution.
In simpler terms, the solutions we observe in Toyota today are only a glimpse of the current level of learning. In the spirit of the improvement kata, it stands to reason that everything is subject to change. The one constant throughout the entire process is the improvement kata or routine that is continually practiced to yield even greater improvements and results.
If you or your company are looking for a practical, hands-on, proven strategy to sustain and improve your current operations, then this book, “Toyota Kata – Managing People For Improvement, Adaptiveness, and Superior Results”, is the one for you. The improvement kata is only part of the equation. The coaching kata is also discussed at length and reveals Toyota’s implementation and training methods to ensure the whole-company mindset is engaged with the process.
Why are we just learning of this practice now? The answer is quite simple. The method itself is practiced by every Toyota employee so frequently that it has become second nature to them and is trained into the culture itself. While the tools that are used to support the practice are known and widely used in industry, the system responsible for creating them has been obscured from view – until now.
Learning and practicing the Toyota improvement kata is a strategy for company leadership to embrace. To do otherwise is simply waiting to copy the competition. I have yet to see a company vision statement where the ultimate goal is to be second best.