In theory, Employee Opinion Surveys provide a pulse of the workforce and the workplace in general. In practice, they measure the performance of executive leadership and the management team. They serve as a tool to understand what is working and to identify opportunities for improvement.
Unfortunately, collecting and compiling survey data is very time-consuming and only represents a snapshot in time. While the survey data captures the essence of what is occurring, every good leader knows that things can change very quickly – even too quickly, as in times of crisis.
The attitude of Leadership is reflected in the gratitude of their Employees. ~ Redge
Leaders who are actively engaged with their teams are likely to dismiss the need for an employee opinion survey and we would tend to agree with them. The attitude of Leadership is reflected in the gratitude of their employees. The only way to get a real pulse for what is happening is to regularly walk the floor and engage with your teams.
Make the time to take the time to engage with your teams. A regular “walk and talk” will yield more benefits to you and your teams than any survey could ever provide. Acting on their suggestions and offering regular feedback will foster a culture of trust, respect, accountability, integrity, and open communication. For that, your employees will be truly grateful.
Your feedback matters
If you have any comments, questions, or topics you would like us to address, please feel free to leave your comment in the space below or email us at firstname.lastname@example.org or email@example.com. We look forward to hearing from you and thank you for visiting.
We are likely to find as many definitions for leadership as there are leaders. I recently downloaded an excellent app titled “Leadership Development” from Apple’s App Store and this definition of leadership was presented in one of the many videos:
“Leadership is the process of influencing people by providing purpose, direction, and motivation to accomplish the mission and improve the organization.”
While the expression, “You can lead a horse to water, but you can’t make him drink,” may be true for some, true leaders recognize and understand the value of making the horse thirsty enough to want to drink on his own.
Visual Management is certainly one of the characteristic traits that sets lean organizations apart from all others. The success of Visual Management is predicated on relevant and current data. To be effective, Visual Management must be embraced and utilized by leadership, management, and employees throughout the organization.
I also believe that “Knowledge is Power and Wisdom is Sharing it.” For this reason, I highly respect those who are bold enough to put their thoughts in writing for the rest of the world to see. Daniel T. Jones, author of a number of books on lean (including Lean Thinking) and Chairman of the Lean Enterprise Academy, is one of those people.
A few days ago, I received this e-mail from Daniel where he presents his thoughts on managing visually.
Learning to See is the starting point for Learning to Act. By making the facts of any situation clearly visible it is much easier to build agreement on what needs to be done, to create the commitment to doing it and to maintain the focus on sustaining it over time.
However what makes visualisation really powerful is that it changes behaviour and significantly improves the effectiveness of working together to make things happen. It changes the perspective from silo thinking and blaming others to focusing on the problem or process and it generates a much higher level of engagement and team-working. This can be seen at many levels on the lean journey. Here is my list, but I am sure you can think of many more.
Standardized work defined by the team as the best way of performing a task makes the work visible, makes the need for training to achieve it visible and establishes a baseline for improvement. Likewise standardized management makes regular visits to the shop floor visible to audit procedures, to review progress and to take away issues to be resolved at a higher level.
Process Control Boards recording the planned actions and what is actually being achieved on a frequent cadence make deviations from the plan visible, so teams can respond quickly to get back on plan and record what problems are occurring and why for later analysis.
Value Stream Maps make the end-to-end process visible so everyone understands the implications of what they do for the rest of the value creation process and so improvement efforts can be focused on making the value stream flow in a levelled fashion in line with demand.
Control Rooms or Hubs bringing together information from dispersed Progress Control Boards makes the synchronisation of activities visible along the value stream, defines the rate of demand for supporting value streams, triggers the need to escalate issues and to analyse the root causes of persistent problems.
A3 Reports make the thought process visible from the dialogue between senior managers and the author or team, whether they are solving problems, making a proposal or developing and reviewing a plan of action.
Strategy Deployment makes the choices visible in prioritising activities, deselecting others and conducting the catch-ball dialogue to turn high level goals into actions further down the organisation.
Finally the Oobeya Room (Japanese for “big room”) makes working together visible in a project environment. So far it has been used for managing new product development and engineering projects. However organisations like Boeing are realising how powerful it can be in managing projects in the Executive Office (see the presentation and the podcast by Sharon Tanner).
The Oobeya Room is in my view the key to making all this visualisation effective. It brings together all of the above to define the objectives, to choose the vital few metrics, to plan and frequently review the progress and delays of concurrent work-streams, to decide which issues need escalating to the next level up and to capture the learning for the next project (see the Discussion Paper, presentation and podcast by Takashi Tanaka).
But more importantly it creates the context in which decisions are based on the facts and recorded on the wall, avoiding fudged decisions and prevarication. It also ensures that resource constraints and win-lose situations that can arise between Departments are addressed and resolved so they do not slow the project down.
Reviewing progress and delays on a daily or weekly basis rather than waiting for less frequent gate review meetings leads to much quicker problem solving. Because these stand-up meetings only need to address the deviations from the plan and what to do about them they also make much better use of management time.
In short the Oobeya Room brings all the elements of lean management together. Taken to an extreme visual management can of course itself become a curse. I have seen whole walls wallpapered with often out-of-date information that is not actively being used in day-to-day decision making. Learning how to focus attention on just the right information to make the right decisions in the right way is the way to unlock the real power of visualisation and team-working in the Oobeya Room.
Daniel T Jones
Chairman, Lean Enterprise Academy
P.S. Those who joined us at our Lean Summit last November got a first taste of the power of the Oobeya Room from Sharon Tanner and Takashi Tanaka. For those eager to learn more, they will be giving our first hands-on one-day Lean Executive Masterclass on 27 June in Birmingham, and a private session for executive teams on 28 June. Only 56 places are available on each day, so book your place NOW to avoid disappointment – Click Here to download the booking form.
I planned to publish this yesterday but for some reason I felt compelled to wait. I doubt it was fate, but as you will see, Toyota once again managed to serendipitously substantiate my reason for it.
I was originally inspired to write this post based on a recent experience I had at a local restaurant.
After I was seated, I ordered a coffee to start things off. The waitress asked, “Would you like cream or milk with your coffee?” I said, “Just cream please.”
A few minutes later my coffee arrived … accompanied by two creams and three milks. So I wonder, why even ask the question? What part of this was routine? Asking the question or grabbing both milk and cream?
Later, when it was time for a refill, the waitress noted the milk containers neatly stacked beside the saucer and said, “Oh, just cream right?” They were quickly removed and replaced.
How many of us are simply going through the motions – say the right words and do the right things without even thinking? In some cases, we may even do the wrong things, like a bad habit, without thinking – like the waitress in the restaurant.
I think we need to be very concerned when our words and actions are reduced to “habits” or the equivalent of meaningless rhetorical questions. We say, “Hi, how are you?” and expect to hear “Fine” or “OK” – whether or not it’s true. Or worse, we don’t even wait for the answer.
When our daily routines become autonomous, they essentially become habits – good or bad. How can you pay attention to the details when they have become ingrained in the everyday monotony we call routine?
The devil is in the details …
Of concern here is how much waste our habits generate that we’re not even aware of. In business, finding the waste is actually easier than it looks. The cure on the other hand may be a different story.
Layered process audits, and regular visits to the “front line” can be used to identify and highlight concerns but, as with many companies, these process reviews only represent a snapshot in time. To be effective, they need to be frequent (daily) and thorough.
In manufacturing, process flows, value streams, and standard work are tools we use to define our target operating plan. However, we know from experience that a gap typically exists between planned and actual performance.
The sequence of events typically occurs as planned; however, the method of task execution varies from person to person and shift to shift. The primary root cause for this variance can be traced to work instructions that do not definitively describe the detailed actions required to successfully complete the task.
Generic work instructions simply do not work. To be effective, our methods must be specific and detail oriented. General instructions leave too much room for error and in turn become a source of variation in our processes.
Quite often, we develop techniques or “tricks” that make our jobs or tasks easier to perform. Learning to recognize and share those “nuances” may be the discerning factors to achieve improved performance.
Worth Waiting For …
As I mentioned at the start of this article, Toyota somehow manages to make its way into my articles and this one is no exception. Earlier this week, I learned that Ray Tanguay, a local Ontario (Canada) resident, is now one of three new senior managing officers for Toyota worldwide.
The Toronto Star published “Farm boy a Toyota go-to guy” in today’s business section, which chronicles Ray Tanguay’s rise to become the company’s only top non-Japanese executive.
What caught my attention, aside from the fact that he was born in a local town here in Ontario, was this quote:
“I like to drill down deep because the devil is always in the details” – Ray Tanguay, Toyota Senior Managing Officer
The article also describes how Ray Tanguay managed to get the attention of Toyota president Akio Toyoda and the eventual development of a global vision to clearly set out the company’s purpose, long-term direction, and goals for employees.
After summarizing Ray Tanguay’s history, the article concludes …
“A few years later, his attention to detail on the shop floor helped the company win a second assembly plant in nearby Woodstock and thousands of more jobs for Canada’s manufacturing sector.”
I note with great interest, “… on the shop floor …” Perhaps, I should have changed the title to “Opportunity: the Devil is in the details!” I still think we were close.
I recognize that benchmarking is not a new concept. In business, we have learned to appreciate the value of benchmarking at the “macro level” through our deliberate attempts to establish a relative measure of performance, improvement, and even for competitor analysis. Advertisers often use benchmarking as an integral component of their marketing strategy.
The discussion that follows will focus on the significance of benchmarking at the “micro level” – the application of benchmarking in our everyday decision processes. In this context, “micro benchmarking” is a skill that we all possess and often take for granted – it is second nature to us. I would even go so far as to suggest that some decisions are autonomous.
With this in mind, I intend to take a slightly different, although general, approach to introduce the concept of “micro benchmarking”. I also contend that “micro benchmarking” can be used to introduce a new level of accountability to your organization.
Human Resources – The Art of Deception: Interviews and Border Crossings
Micro benchmarking can literally occur “in the moment.” The interview process is one example where “micro benchmarking” frequently occurs. I recently read an article titled, “Reading people: Signs border guards look for to spot deception“, and made particular note of the following advice to border crossing agents (emphasis added):
Find out about the person and establish their base-line behavior by asking about their commute in, their travel interests, etc. Note their body language during this stage as it is their norm against which all ensuing body language will be compared.
The interview process, whether for a job or crossing the border, represents one example where major (even life changing) decisions are made on the basis of very limited information. As suggested in the article, one of the criteria is “relative change in behavior” from the norm established at the first greeting. Although the person conducting a job interview may have more than just “body language” to work with, one of the objectives of the interview is to discern the truth – facts from fiction.
Obviously, the decision to permit entry into the country, or to hire someone, may have dire consequences, not only for the applicant, but also for you, your company, and even the country. Our ability to benchmark at the micro level may be one of the more significant discriminating factors whereby our decisions are formulated.
Decisions – For Better or Worse:
Every decision we make in our lives is accompanied by some form of benchmarking. While this statement may seem to be an over-generalization, let’s consider how decisions are actually made. It is a common practice to “weigh our options” before making the final decision. I suggest that every decision we make is rooted against some form of benchmarking exercise. The decision process itself considers available inputs and potential outcomes (consequences):
Better – Worse
Pros – Cons
Advantages – Disadvantages
Life – Death
Success – Failure
Safe – Risk
Decisions are usually intended to yield the best of all possible outcomes and, as suggested by the very short list above, they are based on “relative advantage” or “consequential” thinking processes. At the heart of each of these decisions is a base line reference or “benchmark” whereby a good or presumably “correct” decision can be made.
We have been conditioned to believe (religion / teachings) and think (parents / education / social media / music) certain thoughts. These “belief systems” or perceived “truths” serve as filters, in essence forming the base line or “benchmark” by which our thoughts, and hence our decisions, are processed. Every word we read or hear is filtered against these “micro level” benchmarks.
I recognize that many other influences and factors exist but, suffice it to say, they are still based on a relative benchmark. Unpopular decisions are just one example where social influences are heavily considered and weighed. How many times have we heard, “The best decisions are not always popular ones.” Politicians are known to make the tough and not so popular decisions early on in their term and rely on a waning public memory as the next election approaches – time heals all wounds but the scars remain.
Decisions – Measuring Outcomes
As alluded to in the last paragraph, our decision process may be biased as we consider the potential “reactions” or responses that may result. Politics is rife with “poll” data that somehow sway the decisions that are made. In a similar manner, substantially fewer issues of value are resolved in an election year for fear of a negative voter response.
In essence, there are two primary outcomes to every decision: Reactions and Results. The results of a decision are self-explanatory but may be classified as summarized below.
If you are still with me, I suggest that at least two levels of accountability exist:
The process used to arrive at the decision
The results of the decision
In corporations, large and small, executives are often held to account for worse than expected (negative) performance, where results are the primary – and seemingly only – focus of discussion. I contend that positive results that exceed expectations should be subject to the same, if not higher, level of scrutiny.
Better and worse than expected results are both indicative of a lack of understanding or full comprehension of the process or system and as such present an opportunity for greater learning. Predicting outcomes or results is a fundamental requirement and best practice where accountability is an inherent characteristic of company culture.
Toyota is well known for continually deferring to the most basic measurement model: Planned versus Actual. Although positive (better than expected) results are more readily accepted than negative (worse than expected) results, both impact the business:
Better than expected:
Other potential investments may have been deferred based on the planned return on investment.
Financial statements are understated, which affects other aspects of the business and its transactions.
Decision model / process does not fully describe / consider all aspects to formulate planned / predictable results
Decision process to yield actual results cannot be duplicated unless lessons learned are pursued, understood, and the model is updated.
Worse than expected:
Poor / lower than expected return on investment
Extended financial obligations
Negative impact to cash flow / available cash
Lower stakeholder confidence for future investments
Decision model / process does not fully describe / consider all aspects to formulate planned / predictable results
Decision process will be duplicated unless lessons learned are pursued, understood, and the model is updated.
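The planned-versus-actual review described above can be sketched in a few lines of code. This is a minimal illustration of the principle, not a prescribed model; the tolerance threshold and function name are assumptions of ours:

```python
def review_result(planned, actual, tolerance=0.05):
    """Flag any result that deviates from plan, in either direction, for review.

    Both better- and worse-than-expected results indicate that the decision
    model did not fully describe the process, so both warrant the same
    scrutiny. (The 5% tolerance is an illustrative assumption.)
    """
    deviation = (actual - planned) / planned
    if abs(deviation) <= tolerance:
        return "on plan"
    if deviation > 0:
        return "better than expected: review model"
    return "worse than expected: review model"

# A result 20% above plan is flagged for review just as one 20% below plan is.
print(review_result(planned=100, actual=102))  # → on plan
print(review_result(planned=100, actual=120))  # → better than expected: review model
print(review_result(planned=100, actual=80))   # → worse than expected: review model
```

The symmetry is the point: exceeding the plan triggers the same lessons-learned review as missing it.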
The second level of accountability, and perhaps the most important, concerns the process or decision model used to arrive at the decision. In either case we want to distinguish among informed decisions, “educated guesses”, “wishful thinking”, and willful neglect. We can see that accountabilities exist at both the individual and the system / process level.
The ultimate objective is to understand “what we were thinking” so we can repeat our successes without repeating our mistakes. This seems to be a reasonable expectation and is a best practice for learning organizations.
Some companies are very quick to assign “blame” to individuals regardless of the reason for failure. These situations can become very volatile and once again are best exemplified in the realm of politics. There tends to be more leniency for individuals where policies or protocol has been followed. If the system is broken, it is difficult to hold individuals to account.
The Accountability Solution – Show Your Work!
So, who is accountable? Before you answer that, consider a person who used a decision model and the results were worse than the model predicted. From a system point of view the person followed standard company protocol. Now consider a person who did not use the model, knowing it was flawed, and the results were better than expected. Both “failures” have their root in the same fundamental decision model.
The accountabilities introduced here however are somewhat different. The person following protocol has a traceable failure path. In the latter case, the person introduced a new “untraceable” method – unless of course the person noted and advised of the flawed model before and not after the fact.
Toyota is one of the few companies I have worked with where documentation and attention to detail are paramount. As another example, standardized work is not intended to serve as a rigid set of instructions that can never be changed. To the contrary, changes are permissible, however, the current state is the benchmark by which future performance is measured and proven. The documentation serves as a tangible record to account for any changes made, for better or worse.
Throughout high school and college, we were always encouraged to “show our work”. Some courses offered partial marks for the method although the final answer may have been wrong. The opportunities for learning here however are greater than simply determining the student’s comprehension of the subject material. To the contrary, it also offers an opportunity for the teacher to understand why the student failed to comprehend the subject matter and to determine whether the method used to teach the material could be improved.
Showing the work also demonstrates where the process breakdown occurred. A wrong answer could have been due to a complete misunderstanding of the material or the result of a simple mis-entry on a calculator. Understanding why and how we make our decisions is just as important as understanding our expectations.
While the latter situations may be more typical of a macro level benchmark, I suggest that similar checks and balances occur even at the micro level. As mentioned in the premise, some decisions may even be autonomous (snap decisions). Examples of these decisions are public statements that all too often require an apology after the fact. The sentiments for doing so usually include, “I’m sorry, I didn’t know what I was thinking.” I am always amazed to learn that we may even fail to keep ourselves informed of what we’re thinking sometimes.
Our process improvement strategy is founded on the Theory of Constraints where improvement initiatives are supported by lean and six sigma tools. Process disruptions affecting flow and task execution all contribute to variance and the efforts to eliminate or reduce them are evidenced by increased stability, increased throughput over time, and increased profits.
So, our main goal in production is to improve flow by focusing our efforts to reduce and eliminate variation in our processes. This is also the message behind our previous two posts, OEE in an Imperfect World and Variation: OEE’s Silent Partner. The effects of our actions will be reflected by the metrics we have chosen to measure our performance.
Stories can be the best teachers and when the topic is manufacturing, production, or operations, I highly recommend “The Goal”, an industry standard, and the recently released “Velocity“. Both novels present an all too common manufacturing dilemma – resource capacity and scheduling constraints – to teach the Theory of Constraints. Velocity is a continuation of The Goal and expands the discussion to include Lean and Six Sigma.
For additional resources and reading recommendations, visit our Book Page.
The message is simple: Change drives Change. What are your thoughts?
Significant initiatives, including lean, can reach a level of stagnation that eventually causes the project to either lose focus or disappear altogether. Hundreds of books have already been written that reinforce the concept that the company culture will ultimately determine the success or failure of any initiative. A sustainable culture of innovation, entrepreneurial spirit, and continual improvement requires effective leadership to cultivate and develop an environment that supports these attributes.
When launching any new initiative, we tend to focus on the many positive aspects that will result. Failure is seldom placed on the list of possible outputs for a new initiative. We are all quite familiar with the typical Pros and Cons, advantages versus disadvantages, and other comparative analysis techniques such as SWOT (Strengths, Weaknesses, Opportunities, Threats).
A well-defined initiative should address both the benefits of implementation AND the risks to the operation if it is NOT implemented.
Back on Track
The Vision statement is one starting point to re-energize the team. Of course, this assumes that the team actually understands and truly embraces the vision.
Overcoming Road Blocks
The Charter: Challenge the team to create and sign up to a charter that clearly defines the scope and expectations of the project. The team should have clearly defined goals followed by an effective implementation / integration plan. The charter should not only describe the “Achievements” but also the consequences of failure. Be clear with the expectations: Annual Savings of $xxx,xxx by Eliminating “Task A – B – C”, Reducing Inventory by “xx” days, and by reducing lead times by “xx” days.
Defining Consequences: Competitive pricing will be compromised, leading to a loss of business. This could be rephrased using the model expression: We must do “THIS” or else “THIS”. It has been said that the pain of change must be less than the pain of remaining the same. If not, the program will surely fail.
The Plan: An effective implementation strategy requires a time line that includes reporting gates, key milestones, and the actual events or activities required. The time line should be such that momentum is sustained. If progress suggests that the program is ahead of schedule, revise timings for subsequent events where possible. Extended “voids” or lags in event timing can reduce momentum and cause the team to disengage.
Focus: Oftentimes, we are presented with multiple options to achieve the desired results. An effective decision making process is required to reduce choices or to create a hybrid solution that encompasses several options. The decision process must result in a single final solution.
Consequences: As mentioned earlier, a list of consequences should become part of the Charter process as well. Failure suggests that a desired expectation will not be realized. It is not enough to simply return to “the way it was”. The indirect implication is that every failure becomes a learning experience for the next attempt. In other words, we learn from our failures and stay committed to the course of the charter.
Almost all software programs are challenged to sort data. We don’t really think about the “method” that is used; we just wait for the program to do its task and for the results to appear. At some point, the software development team must have chosen a certain method, also known as an algorithm, to sort the data.
We were recently challenged in a similar situation to decide which sort method would be best suited for the application. You may be surprised to learn that there are many different sorting algorithms available, such as bubble sort, insertion sort, selection sort, merge sort, quicksort, and heap sort.
This is certainly quite a selection, and more methods are certain to exist. Each method has its advantages and disadvantages. Some sorting methods require more computer memory; some are stable, others are not. Our goal was to create a sorted list without duplicates. We considered adding elements and maintaining a sorted “duplicate free” list in real time. We also considered reading all the data first and sorting it after the fact.
The point is that, of the many available options, one solution will eventually be adopted by the team. Using the “wrong” sorting method could result in extremely slow performance and frustrated users. In this case the users of the system may abandon a solution that they themselves were not a part of creating. While a bubble sort may produce the intended result, it is usually not the most efficient.
Another aspect of effective development is to document the analysis process that was used to arrive at the final solution. In our example, we could run comparative timing and computer resource requirements to determine which solution is most suitable to the application. Some algorithms work better on “nearly sorted” lists versus others that work better with “randomly ordered” data.
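The two approaches we weighed – maintaining a sorted, duplicate-free list as each element arrives versus sorting everything after the data is read – can be sketched in Python. The function names are ours, for illustration only:

```python
import bisect

def insert_sorted_unique(items, value):
    """Real-time approach: insert value into an already-sorted list, skipping duplicates."""
    i = bisect.bisect_left(items, value)       # binary search for the insertion point
    if i == len(items) or items[i] != value:   # only insert if not already present
        items.insert(i, value)
    return items

def sort_unique_batch(data):
    """Batch approach: read all the data first, then de-duplicate and sort after the fact."""
    return sorted(set(data))

data = [5, 3, 5, 1, 4, 1]

# Real-time: the list stays sorted and duplicate-free as elements arrive.
live = []
for value in data:
    insert_sorted_unique(live, value)

print(live)                    # → [1, 3, 4, 5]
print(sort_unique_batch(data)) # → [1, 3, 4, 5]
```

Both approaches yield the same result, but with different costs: the real-time version pays an O(n) insertion for every element, while the batch version sorts once at the end. Which trade-off is “right” depends on the application, which is exactly why the analysis deserves to be documented.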
Engage the Team: The team should be represented by multiple disciplines or departments within the organization. Using the simple example from above, the development team may create a working solution that is later abandoned by the ultimate users of the system due to its poor performance. The charter should be very clear on the desired expectations and performance criteria of the final solution.
Creating a model or prototype to represent the solution is commonplace. This minimizes the time and resources expended before arriving at the final solution for implementation.
Vision: Leadership must continue to focus beyond the current steps. A project or program is not an end in itself; rather, it should be viewed as the foundation for the next step of the journey. Lean, like any other initiative, is an evolutionary process. Lean is not defined by a series of prescriptions and formulas. The pursuit and elimination of waste is a mission that can be achieved in many different ways.
Management / Review
Regular management reviews should be part of the overall strategy to monitor progress and more so to determine whether there are any impediments to a successful outcome. The role of leadership is to provide direction to eliminate or resolve the road blocks and to keep the team on track.
Breaking Through Paralysis
The objective is clear – we need to keep the initiative moving and also learn to identify when and why the initiative may have stopped. Running a business is more than just having good intentions. We must be prudent in our execution to efficiently and effectively achieve the desired results.
Rife though it is with the typical political rhetoric that accompanies any change process, Velocity tells a truly intriguing story about how to overcome these challenges and what it can mean to set aside personal agendas and theories for the greater good of the company. It also demonstrates how prescriptive strategies can become an impediment to finding new solutions to solve the problem at hand.
Business novels provide a unique self-paced learning opportunity by teaching new concepts that otherwise may be difficult to explain or appreciate in a formal classroom setting. The story line helps to deepen our understanding and expectations of the concepts all the while improving our ability to retain the information.
Velocity is a great read and, like The Goal, should be mandatory reading for every one involved in manufacturing.
Managing performance on any scale requires some form of measurement. These measurements are often summarized into a single result that is commonly referred to as a metric. Many businesses use tools such as dashboards or scorecards to present a summary or combination of multiple metrics into a single report.
While these reports and charts can be impressive and are capable of presenting an overwhelming amount of data, we must keep in mind what we are measuring and why. Too many businesses are focused on outcome metrics without realizing that the true opportunity for performance improvement can be found at the process level itself.
The ability to measure and manage performance at the process level against a target condition is the strategy that we use to strive for successful outcomes. To put it simply, some metrics are too far removed from the process to be effective and as such cannot be translated into actionable terms to make a positive difference.
Overall Equipment Effectiveness, or OEE, is an excellent example of an outcome metric that expresses, as a percentage, how effectively equipment is used over time. To demonstrate the difference between outcome and process level metrics, let’s take a deeper look at OEE. To be clear, OEE is an outcome metric. At the plant level, OEE represents an aggregate result of how effectively all of the equipment in the plant was used to produce quality parts at rate over the effective operating time. Breaking OEE down into the individual components of Availability, Performance, and Quality may help to improve our understanding of where improvements can be made, but still does not provide a specific direction or focus.
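As a minimal sketch of the calculation (the example figures are illustrative, not from any assessment), OEE is simply the product of its three components, each expressed as a ratio:

```python
def oee(availability: float, performance: float, quality: float) -> float:
    """Overall Equipment Effectiveness as the product of its three components.

    Each input is a ratio between 0 and 1:
      availability = run time / planned production time
      performance  = (ideal cycle time * total count) / run time
      quality      = good count / total count
    """
    return availability * performance * quality

# Example: 90% available, running at 95% of ideal rate, 98% good parts
result = oee(0.90, 0.95, 0.98)
print(f"OEE = {result:.1%}")  # OEE = 83.8%
```

Note how three individually respectable numbers compound into a much lower overall result, which is one reason the aggregate figure alone offers so little direction.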
At the process level, Overall Equipment Effectiveness is a more practical metric and can serve to improve the operation of a specific work cell where a specific part number is being manufactured. Clearly, it is more meaningful to equate Availability, Performance, and Quality to specific process level measurements. We can monitor and improve, in real time, very specific process conditions that have a direct impact on the resulting Overall Equipment Effectiveness. A process operating below the standard rate or producing non-conforming products can immediately be rectified to reverse a potentially negative result.
This is not to say that process level metrics supersede outcome metrics. Rather, we need to understand the role that each of these metrics plays in our quest to achieve excellence. Outcome metrics complement process level metrics and serve to confirm that “We are making a difference.” Indeed, it is welcome news to learn that process level improvements have translated into plant level improvements. In fact, as is the case with OEE, the process level and outcome metrics can be synonymous with a well-executed implementation strategy.
I recommend using Overall Equipment Effectiveness throughout the organization as both a process level and an outcome level metric. The raw OEE data at the process level serves as a direct input to the higher level “outcome” metrics (shift, department, plant, company wide). As such, the results can be directly correlated to specific products and / or processes if necessary to create specific actionable steps.
So, you may be asking, “What are Killer Metrics?” Hint: To Measure ALL is to Manage NONE. Choose your metrics wisely.
As a core metric, Overall Equipment Effectiveness, or OEE, has been adopted by many companies to improve operations and optimize the capacity of existing equipment. Having completed several on-site assessments over the past few months, we have learned that almost all organizations measure performance and quality in real time. However, the availability component of OEE is still a mystery and often misunderstood, specifically with regard to Set Up or Tool Changes.
We encourage you to review the detailed discussion of down time in our original posts “Calculating OEE – The Real OEE Formula With Examples” and “OEE, Down time, and TEEP” where we also present methods to calculate both OEE and TEEP. The formula for Overall Equipment Effectiveness is simply stated as the product of three (3) elements: Availability, Performance, and Quality. Of these elements, availability presents the greatest opportunity for improvement. This is certainly true for processes such as metal stamping, tube forming, and injection molding, to name a few, where tool changes are required to switch from one product or process to another.
Set up or change over time is defined as the amount of time required to change over the process from the last part produced on one run to the first good part off the next. We have learned that confusion exists as to whether this is actually planned down time, as it is an event that is known to occur and is absolutely required if we are going to make more than one product on a given machine.
Planned down time is not included in the Availability calculation. As such, if change over time is considered as a planned event, the perceived availability would inherently improve as it would be excluded from the calculation. Of course, the higher availability is just an illusion as the lost time was still incurred and the machine was not available to run production.
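A small numeric sketch (all figures are illustrative) makes the illusion concrete. Reclassifying changeover as “planned” removes it from the denominator, and availability appears to improve even though the same minutes were still lost:

```python
# Illustrative shift: 480 planned minutes, 60 minutes of changeover,
# 30 minutes of breakdowns. All figures are hypothetical.
planned_time = 480
changeover = 60
breakdowns = 30

# Availability with changeover counted as down time (the honest view):
run_time = planned_time - changeover - breakdowns
availability = run_time / planned_time
print(f"Availability = {availability:.1%}")         # Availability = 81.2%

# If changeover is (mis)labeled as planned down time, it drops out of
# the denominator and the figure improves -- on paper only. The machine
# was still unavailable for those 60 minutes.
perceived = run_time / (planned_time - changeover)
print(f"Perceived availability = {perceived:.1%}")  # Perceived availability = 92.9%
```

The gap between the two numbers is exactly the capacity that the reclassification hides from view.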
If we could change a process at the flip of a switch, set up time would be a non-issue and we could spend our time focusing on other improvement initiatives. While some processes do require extensive change over time, there is always room for improvement. This is best exemplified by the metal stamping industry, where die changes literally went from hours to minutes.
To remain competitive and to increase available capacity, many companies quickly adopted SMED (Single Minute Exchange of Dies) initiatives after recognizing that significant production capacity was being lost to extensive change over times. As capacity utilization improves, overtime through extended shifts and capital spending on new equipment are also reduced.
Significantly reduced inventories can also be realized as product change overs become less of a concern, while shorter change overs provide greater flexibility to accommodate changes in customer demand in real time. Inventory turns will increase significantly, along with net available cash from operations.
Redefining Down Time
The return on investment for Quick Tool Change technologies is relatively short and the benefits are real and tangible as demonstrated through the metrics mentioned above. Rather than attempt to categorize down time as either planned or unplanned, consider whether the activity being performed is impeding the normal production process or can be considered as an activity required for continuing production.
We prefer to classify down time as either direct or indirect. Any down time such as Set Up, Material Changes, Equipment Breakdowns, Tooling Adjustments, or other activity that impedes production is considered DIRECT down time. Indirect down time applies to events such as Preventive Maintenance, Company Meetings, or Scheduled IDLE Time. These events are indeed PLANNED events where the machine or process is NOT scheduled to run.
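The classification above can be sketched as a simple lookup. The event names come from the text; treating unlisted events as errors to be triaged, rather than silently bucketing them, is our own assumption:

```python
# Sketch of the direct / indirect down time classification described
# above. Event names follow the text; the lookup structure is illustrative.
DIRECT = {"Set Up", "Material Change", "Equipment Breakdown", "Tooling Adjustment"}
INDIRECT = {"Preventive Maintenance", "Company Meeting", "Scheduled Idle"}

def classify(event: str) -> str:
    """Direct down time impedes the normal production process; indirect
    down time covers planned events where the machine is not scheduled
    to run."""
    if event in DIRECT:
        return "direct"
    if event in INDIRECT:
        return "indirect"
    # Forcing unknown events to be classified explicitly prevents them
    # from quietly disappearing from the availability calculation.
    raise ValueError(f"Unclassified down time event: {event}")

print(classify("Set Up"))                  # direct
print(classify("Preventive Maintenance"))  # indirect
```

The key property is that every down time event lands in exactly one bucket, so there is no gray zone for set up time to be argued out of the metric.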
Redefine the Objective
Set up or change over time is often the subject of much heated debate and tends to create more discussion than is necessary. The reason for this is simple. Corporate objectives are driven by metrics that measure performance to achieve a specific goal.
Unfortunately, these objectives are often translated into personal performance concerns for those involved in the improvement process. Rather than making real improvements, the tendency is to rationalize current performance levels and to look for ways to revise the definitions that create the perception of poor performance. Since availability does not include planned down time, many attempts are made to exclude certain down time events, such as set up time, to create a better OEE result than was actually achieved.
Attempts to rationalize poor performance inhibit our ability to identify opportunities for improvement. From a similar perspective, we should also be prudent with, and cognizant of, the time allotted for “planned” events.
It is for this reason that some companies have resorted to measuring TEEP, which is based on a 24-hour day. In many respects, TEEP eliminates all uncertainty with regard to availability since you are measured on the ability to produce a quality part at rate around the clock. As such, our mission is simple: “To Safely Produce a Quality Part At Rate, Delivered On Time and In Full.” Any activity that detracts from achieving or exceeding this mission is waste.
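A minimal sketch of the TEEP calculation (figures are illustrative) shows why nothing can be argued out of it. Measured against all 1,440 minutes of the calendar day, every unscheduled hour counts against the result:

```python
# TEEP measures effectiveness against total calendar time, so no down
# time can be excluded as "planned". All figures are hypothetical.
CALENDAR_MINUTES = 24 * 60  # 1440 minutes in a day

def teep(oee: float, planned_minutes: float) -> float:
    """TEEP = OEE x utilization, where utilization is planned
    production time divided by total calendar time."""
    utilization = planned_minutes / CALENDAR_MINUTES
    return oee * utilization

# A cell running two 8-hour shifts at 80% OEE:
print(f"TEEP = {teep(0.80, 2 * 480):.1%}")  # TEEP = 53.3%
```

Even a healthy 80% OEE shrinks once the idle third shift is counted, which is precisely the uncomfortable honesty TEEP is designed to provide.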
Remember to get your OEE spreadsheets at no charge from our Free Downloads Page or Free Downloads Box in the sidebar. They can be easily and readily customized for your specific process or application.
Please feel free to send your comments, suggestions, or questions to Support@VergenceAnalytics.com