Tag: Standardized Work

Flawless Execution – “This Is It” – Practice Makes Perfect

We are often encouraged to look beyond our own business models to expand our horizons or simply to gain a different perspective. Music is one of my personal areas of interest outside of work, and I have learned to appreciate and value the many genres of music that exist today. Having played lead guitar in a number of bands over the years and done a little recording in my own studio, I can only imagine the level of commitment required to perform and record professionally.

I was inspired to write this post after watching Michael Jackson’s DVD, “This Is It”. It is impressive to see how everyone is engaged and intimately involved with every nuance of the performance, from the performers themselves to the people working behind the scenes. Even more amazing was Michael Jackson’s recall of every note and step of the choreography. Michael provided extensive direction and leadership to ensure a world-class performance could be delivered.

What does this have to do with Lean?

At its core, playing music can simply be described as playing the right notes at the right time. In many respects, music is analogous to many of our manufacturing processes. Music has a known process rate (beats per minute). The standardized work or method is the musical score that shows what notes to play and when to play them. Similarly, the choreography serves as standardized work to document each and every step or movement for each performer. It can be very obvious (and painful) when someone plays the wrong note, sounds a note at the wrong time, or missteps.

Knowing that “This Is It” was produced from footage filmed during the development of the production also exemplifies how video can be used not only to capture the moment but to improve the process along the way. Film provides the opportunity to review the performance objectively, even if you happen to be in it. You will note that people are much more engaged and become “self-aware” in a radically different way.

Communication + Practice makes Perfect

It is also readily apparent that many hours of rehearsal are required to produce a world-class performance.  Imagine working for days, weeks, months, or even years to produce a two-hour show for all of the world to see.  How much can one person do to refine and perfect the performance?  How much effort would you be willing to expend knowing that literally billions of people may someday be watching you!

As professionals, individual performers are expected to know their respective roles thoroughly. They are paid for their expertise and ability to perform to high expectations under demanding circumstances. The purpose of the rehearsal is not necessarily to practice your part as an individual, but rather to exercise your expertise as part of the team. Each performer must learn their cues from other performers and determine how they relate and fit into the overall production process. Rehearsals provide the basis of the team’s communication strategy to ensure everyone is on the same page all the time, every time.

Effective Training

Finally, “This is it” demonstrates the importance of training the whole team.  Although individual training may be required, eventually the team must be brought together in its entirety.  A downfall of many business training programs is that often only a select few people from various departments are permitted to attend with the expectation that they will bring what they learned “back to the team”.  One of the most overlooked elements of training is the communication and coordination of activities between team members.  Group breakout sessions attempt to improve interaction among team members, but this can’t replace the reality of working with the team on home turf.  It seems that some companies expect trained professionals to intuitively know how to communicate and interact with each other.  Nothing could be further from the truth if you are looking to develop a high performance team.

Last Words

Imagine what it would be like if we rehearsed our process and material changes with the same persistence and raw determination that performers and athletes in the entertainment and sports worlds exhibit. Overall Equipment Effectiveness, and more specifically Availability, may improve beyond our expectations. Imagine applying the same degree of standardization to the tasks we perform every day! As we strive for excellence, our tolerance for anything less diminishes as well.

Flawless execution requires comprehensive planning, communication, training, practice, measurement, reflection, leadership, commitment, and dedication.

It’s time to play some riffs!

Until Next Time – STAY lean!


22 Seconds to Burn – Excel VBA Teaches Lean Execution


Background:

VBA for Excel has once again provided the opportunity to demonstrate some basic lean tenets. The methods used to produce the required product or solution can yield significant savings in time and, ultimately, money. The current practice is not necessarily the best practice in your industry. In manufacturing, trivial or minute differences in the methods deployed become more apparent during mass production or as volume and demand increase. The same is true for software solutions, and both are subject to continual improvement and the relentless pursuit of waste elimination.

Excel is ideal for demonstrating certain aspects of Lean: numbers are the raw materials, and formulas represent the processes or methods used to produce the final solution (or product). Moreover, most businesses already use Excel to manage many of their daily tasks, so any extended learning can only help users better understand the Excel environment.

The Model:

We recently created a perpetual holiday calendar for one of our applications and needed an algorithm or procedure to calculate the dates for Easter Sunday and Good Friday. We adopted an algorithm found on Wikipedia at http://en.wikipedia.org/wiki/Computus that produces the correct date for Easter Sunday.
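For illustration, here is a minimal VBA sketch of this kind of routine, assuming the Anonymous Gregorian algorithm as presented in the Wikipedia article, with Excel’s FLOOR replaced by VBA’s INT (see Improvements below). Variables are intentionally left undeclared to mirror our original baseline; this is a sketch, not necessarily the exact code used in our tests:

    ' Baseline sketch: Anonymous Gregorian algorithm (Wikipedia Computus),
    ' with FLOOR replaced by Int for VBA. Variables intentionally left
    ' undeclared (Variant) to mirror the original baseline.
    Public Function EasterDay(Yr)
        a = Yr Mod 19                          ' position in the 19-year Metonic cycle
        b = Int(Yr / 100)                      ' century terms
        c = Yr Mod 100
        d = Int(b / 4)
        e = b Mod 4
        f = Int((b + 8) / 25)
        g = Int((b - f + 1) / 3)
        h = (19 * a + b - d - g + 15) Mod 30   ' epact-related correction
        i = Int(c / 4)
        k = c Mod 4
        l = (32 + 2 * e + 2 * i - h - k) Mod 7 ' weekday correction
        m = Int((a + 11 * h + 22 * l) / 451)
        EasterDay = DateSerial(Yr, Int((h + l - 7 * m + 114) / 31), _
                               ((h + l - 7 * m + 114) Mod 31) + 1)
    End Function

As a quick check, EasterDay(2010) returns April 4, 2010, the correct date for Easter Sunday that year; Good Friday is then simply two days earlier.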

In our search for the Easter algorithm, we found another algorithm that uses a different method of calculation and also produces the correct results. Pleased to have two working solutions, we initially did not spend much time thinking about the differences between them. If both routines produce the same results, then we should choose the one with the faster execution time, so we performed a simple time study to determine the more efficient formula. For a single calculation, or iteration, the differences are virtually negligible; however, when subjected to 5,000,000 iterations, the time differences were significant.

This number of cycles may seem grossly overstated; however, when we consider how many automobiles and components are produced each year, 5,000,000 represents only a fraction of the total volume. Taken further, Excel performs thousands of calculations a day, and many times that number as data are entered into spreadsheets. When we consider the number of “calculations” performed at any given moment, the total quickly grows beyond comprehension.

Testing:

As relatively new students of John Walkenbach’s book, “Excel 2003 Power Programming with VBA”, speed of execution, efficiency, and “declaring your variables” have entered our world of Lean. We originally created two routines, called EasterDay and EasterDate. We then created a simple procedure to run each function through 5,000,000 cycles. Again, this may sound like a lot of iterations, but computers work at remarkable speeds and we wanted enough resolution to discern any time differences between the routines.
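A minimal sketch of the kind of test procedure we mean, assuming VBA’s Timer function (which returns seconds elapsed since midnight) provides adequate resolution over this many cycles; names are illustrative:

    ' Timing harness sketch: run the routine under test for 5,000,000
    ' cycles and report the elapsed time in seconds.
    Sub TimeEasterRoutines()
        Const CYCLES As Long = 5000000
        Dim n As Long, t As Single, d As Date
        t = Timer
        For n = 1 To CYCLES
            d = EasterDay(2010)    ' swap in the routine under test
        Next n
        Debug.Print "EasterDay: "; Timer - t; " seconds"
    End Sub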

The difference in the time required to execute 5,000,000 cycles by each of the routines was surprising.  We recorded the test times (measured in seconds) for three separate studies as follows:

  • Original EasterDay:  31.34,  32.69,  30.94
  • Original EasterDate:  22.17,  22.28,  22.25

The differences between the two methods ranged from 8.69 to 10.41 seconds. Expressed in different terms, the EasterDay routine takes 1.39 to 1.47 times as long as EasterDate. Clearly the original EasterDate function has the better execution speed. What we perceive as virtually identical systems or processes at low volumes can yield significant differences that are often only revealed or discovered by increased volume or the passage of time.

In the Canadian automotive industry there are at least five major OEM manufacturers (Toyota, Honda, Ford, GM, and Chrysler), collectively producing millions of vehicles a year. All appear to produce similar products and perform similar tasks; however, the performance ratios for each of these companies are starkly different. We recognize Toyota as the high-velocity, lean, front-running company. We contend that Toyota’s success is partly driven by an inherent attention to the detail of processes and product lines at all levels of the company.

Improvements:

We decided to revisit the Easter day calculations to see what could be done to improve the execution speed. We created a new procedure called “EasterSunday”, using the original EasterDay procedure as our baseline. Note that the original Wikipedia code was only slightly modified to work in VBA for Excel: we replaced the FLOOR function with the INT function. Otherwise, the procedure is presented without further revision.

To create the final EasterSunday procedure, we made two revisions to the original code without changing the algorithm structure or the essence of the formulas themselves. The changes resulted in significant performance improvements, summarized as follows:

  1. For integer division, we replaced the INT(n / d) statements with the less commonly used (or known) “\” integer division operator. In other words, we used “n \ d” in place of “INT(n / d)” wherever an integer result is required. This change alone resulted in a gain of 11 seconds. One word of caution if you plan to use the “\” operator: the “n” and “d” operands are converted to integers before the division is performed.
  2. We declared each of the variables used in the subsequent formulas and gained yet another remarkable 11 seconds. Although John Walkenbach and certainly many other authors stress declaring variables, it is surprising how few published VBA procedures actually put this into practice. (Both changes are illustrated in the sketch following this list.)
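As an illustration, the following combines both changes with the same algorithm as the baseline shown earlier; again, a sketch rather than the exact code from our tests:

    ' Revised sketch: "\" integer division in place of INT( n / d ),
    ' and every variable declared As Long.
    Public Function EasterSunday(ByVal Yr As Long) As Date
        Dim a As Long, b As Long, c As Long, d As Long, e As Long, f As Long
        Dim g As Long, h As Long, i As Long, k As Long, l As Long, m As Long
        a = Yr Mod 19
        b = Yr \ 100
        c = Yr Mod 100
        d = b \ 4
        e = b Mod 4
        f = (b + 8) \ 25
        g = (b - f + 1) \ 3
        h = (19 * a + b - d - g + 15) Mod 30
        i = c \ 4
        k = c Mod 4
        l = (32 + 2 * e + 2 * i - h - k) Mod 7
        m = (a + 11 * h + 22 * l) \ 451
        EasterSunday = DateSerial(Yr, (h + l - 7 * m + 114) \ 31, _
                                  ((h + l - 7 * m + 114) Mod 31) + 1)
    End Function

Note that every operand of “\” here is already a whole-numbered Long, so the integer conversion caution above does not come into play.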

Results:

The results of our Time Tests appear in the table below.  Note that we ran several timed iterations for each change knowing that some variations in process time can occur.

Routine        Time (s)     Code Version
EasterDay      31.34375     Original code; uses INT( n / d ) for integer division
EasterSunday   20.828125    Change 1: replaced INT( n / d ) with ( n \ d )
EasterDate     22.28125     Original code; alternate calculation method

Re-test to confirm timing:
EasterDay      30.9375      Original code; uses INT( n / d ) for integer division
EasterSunday   20.921875    Change 1: replaced INT( n / d ) with ( n \ d )
EasterDate     22.25        Original code; alternate calculation method

Re-test to confirm timing:
EasterDay      30.90625     Original code; uses INT( n / d ) for integer division
EasterSunday   21.265625    Change 1: replaced INT( n / d ) with ( n \ d )
EasterDate     22.25        Original code; alternate calculation method

Re-test to confirm timing:
EasterDay      31.078125    Original code; uses INT( n / d ) for integer division
EasterSunday   9.171875     Changes 1 and 2: variables DECLARED
EasterDate     22.1875      Original code; alternate calculation method

Re-test to confirm timing:
EasterDay      31.109375    Original code; uses INT( n / d ) for integer division
EasterSunday   9.171875     Changes 1 and 2: variables DECLARED
EasterDate     22.171875    Original code; alternate calculation method

The EasterSunday procedure contains the changes described above. We achieved a total savings of approximately 22 seconds. Both integer division methods yield the same result; however, one is clearly faster than the other.

The gains made by declaring variables were just as significant. In VBA, undeclared variables default to the “Variant” type. Although Variant types are more flexible by definition, performance diminishes significantly. We saved at least an additional 11 seconds simply by declaring variables. Variable declarations are to VBA what policies are to your company: they define the “size and scope” of the working environment. Undefined policies or vague specifications create ambiguity and generate waste.

Lessons Learned:

In manufacturing, a 70% improvement is significant, worthy of awards, accolades, and public recognition. The lessons learned from this example are eight-fold:

  1. For manufacturing, do not assume the current working process is the “best practice”.  There is always room for improvement.  Make time to understand and learn from your existing processes.  Look for solutions outside of your current business or industry.
  2. Benchmarking a current practice against another existing practice is just the incentive required to make changes.  Why is one method better than another?  What can we do to improve?
  3. Policy statements can influence the work environment and execution of procedures or methods.  Ambiguity and lack of clarity create waste by expending resources that are not required.
  4. Improvements to an existing process are possible, with results that outperform the nearest known competitor. We anticipated, at best, having the two routines run at similar speeds. We did not anticipate that the final EasterSunday routine would run more than 50% faster than our simulated competitive benchmark (EasterDate).
  5. The greatest opportunities are found where you least expect them.  Learning to see problems is one of the greatest challenges that most companies face.  The example presented in this simple analogy completely shatters the expression, “If it ain’t broke, don’t fix it.”
  6. Current practices are not necessarily best practices and best practices can always be improved.  Focusing on the weaknesses of your current systems or processes can result in a significant competitive edge.
  7. Accelerated modeling can highlight opportunities for improvement that would otherwise not be revealed until full high volume production occurs.  Many companies are already using process simulation software to emulate accelerated production to identify opportunities for improvement.
  8. The most important lesson of all is this:

Speed of Execution is Important >> Thoughtful Speed of Execution is CRITICAL.

We wish you all the best of this holiday season!

Until Next Time – STAY Lean!

Vergence Analytics

At the outset of the holiday project, the task seemed relatively simple, until we discovered that the rules for Easter Sunday did not follow the simple rules that applied to other holidays throughout the year. As a result we learned more about history, astronomy, and the tracking of time than we ever would have thought possible.

We also learned that Excel’s spreadsheet MOD function is subject to precision errors, and that the VBA Mod operator can yield a different result than the spreadsheet version (most notably with negative operands).

We also rediscovered Excel’s leap year bug (29-Feb-1900). 1900 was not a leap year. The bug resides in the spreadsheet version of the date functions; the VBA date functions recognize that 29-Feb-1900 is not a valid date.
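Both quirks are easy to demonstrate from the VBA Immediate window; a quick sketch, assuming the code runs inside Excel so that Evaluate can reach the worksheet function:

    Sub ShowExcelQuirks()
        ' VBA's Mod operator and the worksheet MOD function disagree
        ' on negative operands:
        Debug.Print -5 Mod 4                 ' VBA operator:       -1
        Debug.Print Evaluate("MOD(-5,4)")    ' worksheet function:  3
        ' The 1900 leap year bug lives in the worksheet layer: in a cell,
        ' =DATE(1900,2,29) returns serial 60 (29-Feb-1900). VBA's date
        ' handling rejects the same date:
        Debug.Print IsDate("Feb 29, 1900")   ' False: not a real date
        Debug.Print IsDate("Feb 29, 2000")   ' True:  2000 was a leap year
    End Sub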

Software Modeling for Standardized Work

The concept of Standard Work is understood in virtually any work environment and is not exclusive to the lean enterprise.  Typically the greater challenge of standardized work is actually preparing an effective document that adequately describes the “work” to be performed.

The objective of standardized work is to provide a documented “method” for completing a sequence of tasks that can be executed to consistently yield a quality product at rate, regardless of who is performing the work. The documentation typically created falls short of this expectation.

The Ideal Model for Standardized Work

We would expect to find examples of well-documented standardized work at nuclear stations, military installations, in aerospace, and many other places where risks are high and operation sequences are critical. High-velocity, lean organizations recognize that a disciplined process approach is the key to discovering opportunities for improvement and to supporting future problem-solving activities if required.

Computer programs are perfect models of standardized work in action.  They perform the same tasks day after day, collecting, storing, and processing data.  We have certain performance expectations although we seldom understand the inner workings of the programs themselves.

This is certainly true for the computers we deal with in our personal lives, such as mobile phones, automated banking machines, GPS mapping systems, and the many “gaming” programs that people play. Our interactions with the “program” are limited to the HMI, or Human-Machine Interface, which represents only a tiny fraction of the thousands of lines of computer code executing the transaction requests in the background.

The Software Development Model

Although few of us may ever write a program, we do understand that every instruction or line of code in a program is critical to the successful execution of the program as a whole.  Every line of code represents a specific instruction, process sequence, or step that must be executed by the computer.  Similarly, standardized work identifies the specific steps that must be followed to successfully complete the task.

Any time a computer system “goes down” or a critical error occurs, someone in the IT department is looking for the source of the problem. The software is typically written to at least provide a hint as to where the problem may be. In some cases the solution may be as drastic as rebooting the system or as simple as reloading the specific application.

We should be able to perform a similar analysis when a process fails to perform to expectations or when we are confronted with quality issues or other process disruptions or failures.  The ability to consistently repeat a sequence of steps is directly correlated to the quality and level of detail described in the standardized work document.

Aside:  An example from the Quality department:

Gage Repeatability and Reproducibility studies, also known as Gage R&R studies, are often used to validate the effectiveness of a measurement system or method (fixture or equipment) for a specific application. If the Gage R&R result is less than 10%, the gauge or fixture is deemed acceptable; a result greater than 30% renders the measurement system unusable for the application. The results can be evaluated statistically to determine whether the “problem” is repeatability (equipment variability) or reproducibility (operator variability).
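For reference, the percentage is commonly computed (assuming the widely used AIAG convention here) as %GR&R = 100 × GRR / TV, where GRR = √(EV² + AV²), EV is the equipment variation (repeatability), AV is the appraiser variation (reproducibility), and TV is the total variation observed in the study.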

When the measurement system fails to meet the requirements of the application, a significant amount of time and effort is expended to achieve an acceptable result. The gauging strategy is reviewed, including part / fixture net locations, quantity of net pads and / or pins, net pad and / or pin sizes, clamping sequences, and clamping pressures, all in an effort to improve the measurement system. Instructions are revised and operators are retrained accordingly.

In contrast, we seldom see the same level of time and effort expended to develop, analyze, test and document standardized work at the machine or station where the work is actually being performed.  Although the process may be improved to yield a quality product, the method or work instruction to achieve a consistent result is not adequately described or defined.

Understanding the tasks to be performed and the time required to perform them is essential to determine effective process cycle times (rates) and also to understand where changes to the process may yield improved performance.  This is of particular importance for companies using OEE as a key process metric.

Note: one indicator that standardized work methods should be reviewed is when excuses for poor performance are attributed to a “new operator” or a “steep learning curve”.

Extending the Program Model Concept

We can all appreciate the “built-in” or inherent discipline of computers executing thousands of lines of code in the same sequence every time the program runs.  To add complexity to our model, consider the discipline and learning that is necessary to write the code itself.  The software development team must understand the purpose of the program, how it will be used, design and create a user interface, determine programming algorithms to achieve the desired results and functionality, and ultimately they must write the code that will perform these functions.

Anyone who has attempted to write a program, or knows someone who has, will also be familiar with the term “DEBUG”. At least as many hours may be spent testing and debugging code as were spent writing it. Even after hundreds of hours of testing, some “bugs” still make it into the real world. Microsoft’s bug-laden operating system releases have been the target of Apple Computer advertising campaigns for this very reason.

Some code may function without error when executed in isolation and some bugs may not be discovered until the module is interacting with the program as a whole.  In this regard, it is also important to consider the potential of interactions with other processes when developing standardized work.  Upstream and downstream operations may have a direct impact on the work being performed. 

The software development team must select the programming language that will be used to develop the final code and the individual programmers must also follow and understand the syntax and language protocols.  Although the product of the software development team is the “executable” program that the computer will run, we can be assured that the process for arriving at this final product is also quite rigorous.

Although we never get to see the native or original code, the modules are likely highly optimized, thoroughly commented, and well documented. Comments are technically “non-value-added” steps in the program; however, they usually describe the scope and purpose of the procedure or clarify code intentions and algorithms. These comments are valuable when debugging is required or when the code is subject to future reviews.

The discipline of software development is not too far from the level of discipline that should be in place to develop standardized work.  The quality of standardized work processes would improve dramatically if each sequence was given the same level of scrutiny as a single line of code.

Making Improvements with Standardized Work

You may be wondering how flexibility exists in an environment of extreme discipline and rigid rules.  It is actually the rigidity and discipline that supports or encourages flexibility.  The discipline is in place to encourage managed change events without compromising current process knowledge or levels of understanding.  A well-defined process is much easier to understand and therefore is also easier to analyze for improvement opportunities.  The level of understanding should be such that a quantifiable margin or level of improvement can be predicted.

With reference to our software model, you will appreciate that the efficiency or speed of a program depends on the methods or algorithms the programmer used to develop the solution. How many times have you stared at your computer wondering, “What’s taking so long?” The timing of a simple data sort can vary depending on the sorting algorithm the programmer chose.

The very language elements or functions that the programmer uses will have a profound effect on program execution time. Many programmers have developed and use high-precision “timing” functions to help optimize their code for efficiency and speed of execution. Machine-language-level programmers are likely to know how many clock cycles each instruction requires.
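As an illustration, one widely published approach in VBA is to wrap the Windows QueryPerformanceCounter API, which offers far finer resolution than the Timer function used in our earlier tests; a sketch, assuming a 32-bit VBA host:

    ' High-resolution timer sketch. The Currency type is a common trick
    ' for holding the API's 64-bit counter; the 10,000x Currency scaling
    ' cancels out in the division.
    Private Declare Function QueryPerformanceCounter Lib "kernel32" _
        (lpPerformanceCount As Currency) As Long
    Private Declare Function QueryPerformanceFrequency Lib "kernel32" _
        (lpFrequency As Currency) As Long

    Public Function PreciseSeconds() As Double
        Dim counts As Currency, freq As Currency
        QueryPerformanceCounter counts
        QueryPerformanceFrequency freq
        PreciseSeconds = counts / freq
    End Function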

Understanding the “process instructions” at this level creates very specific challenges, with predictable outcomes and a degree of certainty when changes are considered. Changing an algorithm is quite different from simply changing a line of code within a specific function; in either case, however, the scope, purpose, and impact of the change can be clearly defined and assessed in advance.

Last Words:

One of the more significant lean developments in manufacturing was the introduction of Quick Changeover and Single Minute Exchange of Dies (SMED).  The setup time reductions that have been achieved are truly remarkable and continue to improve with advances in technology.

When Quick Changeover and SMED programs were first introduced, most companies did not have a defined setup procedure or process.  The most significant effort was spent developing actual setup instructions:  identifying tasks to be completed, determining the sequence of events, who was responsible, and when they could be performed.

Ultimately external and internal setup activities were defined, setup teams were created, specific tasks and sequences of events were assigned and defined, and setup times were reduced from hours to 10 minutes or less.

Standardized Work is a fundamental element of Lean Manufacturing. As notes are the language of musicians and make for great songs and sounds, so Standardized Work is the language of a Lean organization.

Until Next Time – STAY Lean!

Time Studies with your BlackBerry

Performing a time study is relatively easy compared to only a few years ago. The technologies available today allow studies to be conducted quite readily.

Time Studies and OEE (Overall Equipment Effectiveness)

The Performance factor for OEE is based on the Ideal Cycle Time of the process. For fixed-rate processes, the name-plate rate may suffice but should still be confirmed. For other processes, such as labour-intensive operations, a time study is the only way to determine the true or ideal cycle time.

When measuring the cycle time, we typically use “button to button” timing to mark a complete cycle. It can be argued that an operator may lose time retrieving or packing parts or moving containers; including these events in the gross cycle time will hide those opportunities. It is better to exclude any events that are not considered part of the actual production cycle.

When calculating the Performance factor for Overall Equipment Effectiveness (OEE), efficiency shortfalls will show up as less than 100% Performance. The reasons for this less-than-optimal level of performance are the activities the operator is required to perform other than actually operating the machine or producing parts.
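The calculation itself is simple; a minimal sketch (our own variable names, with times in any consistent unit such as seconds):

    ' Performance = (Ideal Cycle Time x Total Count) / Run Time
    Public Function OEEPerformance(ByVal idealCycleTime As Double, _
                                   ByVal totalCount As Long, _
                                   ByVal runTime As Double) As Double
        OEEPerformance = (idealCycleTime * totalCount) / runTime
    End Function

For example, a 22-second ideal cycle producing 140 parts in a 3,600-second hour gives (22 × 140) / 3600 ≈ 0.856, or 85.6% Performance; the shortfall from 100% reflects the non-cycle activities described above.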

All operator activities and actions should be documented using a standardized operating procedure or standardized work methodology. This will allow all activities to be captured, as opposed to being absorbed into the job function.

The BlackBerry Clock – Stopwatch

One of the tools we have used on the fly is the BlackBerry Clock’s Stopwatch function. The stopwatch feature is very simple to use and provides lap time recording as well.

When performing time studies using a traditional stopwatch, keeping track of individual cycle times can be difficult. With the stopwatch function, the history for each “lap” time is retained. To determine the average cycle time, divide the total elapsed time by the number of completed cycles (or laps); for example, 10 cycles completed in 220 seconds yields an average cycle time of 22 seconds.

The individual lap times are subject to a certain degree of uncertainty or error as there will always be a lead or lag time associated with the pushing of the button on the BlackBerry to signal the completion of a cycle.  Although this margin of error may be relatively small, even with this level of technology, the human element is still a factor for consideration.

Once the time study is complete you can immediately send the results by forwarding them as an E-mail, PIN, or SMS.

The BlackBerry Camera – Video Camera

Another useful tool is the video camera.  Using video to record operations and processes allows for a detailed “step by step” analysis at any time.  This is particularly useful when establishing Standard Operating Procedures or Standardized Work.

Uploading videos and pictures to your computer is as easy as connecting the device to an available USB port.  In a matter of minutes, the data is ready to be used.

Video can also be used to analyze work methods and sequences, and it serves as a valuable problem-solving tool.

Until Next Time – STAY Lean!

We are not affiliated with Research In Motion (RIM).  The intent of this post is to simply demonstrate how the technology can be used in the context described and presented.

OEE For Manufacturing

We are often asked what companies (or types of companies) are using OEE as part of their daily operations. While our focus has been primarily in the automotive industry, we are highly encouraged by the level of integration deployed in the semiconductor industry. We have found an excellent article that describes how OEE, among other metrics, is being used to sustain and improve performance in the semiconductor industry.

Somehow it is not surprising to learn that the semiconductor industry has established a high level of OEE integration in its operations. Perhaps this is one reason why electronics continue to improve at such a rapid pace in both technology and price.

To get a better understanding of how the semiconductor industry has integrated OEE and other related metrics into their operational strategy, click here.

The article clearly presents a concise hierarchy of the metrics (including OEE) typically used in operations, along with their interactions and dependencies. The semiconductor industry serves as a great benchmark for OEE integration and for how OEE can be used as a powerful tool to improve operations.

While we have reviewed some articles that describe OEE as an overrated metric, we believe that the proof of wisdom is in the result. The semiconductor industry is exemplary in this regard. It is clear that the electronics industry “gets it”.

As we have mentioned in many of our previous posts, OEE should not be an isolated metric.  While it can be assessed and reviewed independently, it is important to understand the effect on the system and organization as a whole.

We appreciate your feedback.  Please feel free to leave us a comment or send us an e-mail with your suggestions to leanexecution@gmail.com

Until Next Time – STAY lean!

Benchmarking OEE

Benchmarking Systems:

We have learned that an industry standard definition for Overall Equipment Effectiveness (OEE) has been adopted by the semiconductor industry; it also confirms our approach to calculating and using OEE and other related metrics.

The SEMI standards of interest are as follows:

  • SEMI E10:  Definition and Measurement of Equipment Reliability, Availability, and Maintainability.
  • SEMI E35:  Guide to Calculate Cost of Ownership Metrics.
  • SEMI E58:  Reliability, Availability, and Maintainability Data Collection.
  • SEMI E79:  Definition and Measurement of Equipment Productivity – OEE Metrics.
  • SEMI E116:  Equipment Performance Tracking.
  • SEMI E124:  Definition and Calculation of Overall Factory Efficiency and other Factory-Level Productivity Metrics.

It is important to continually learn and improve our understanding of the development and application of the metrics used in industry. It is often said that you can’t believe everything you read (especially on the internet). As such, we recommend researching these standards to determine their applicability to your business as well.

Benchmarking Processes:

Best practices and methods used within and outside of your specific industry may bring a fresh perspective to the definitions and policies already in place in your organization. Just as processes are subject to continual improvement, so are the systems that control them. Although many companies use benchmarking data to establish their own performance metrics, we strongly encourage benchmarking of best practices or methods; this is where the real learning begins.

World Class OEE is typically defined as 85% or better. Additionally, to achieve this level of “World Class Performance”, the factors for Availability, Performance, and Quality must be at least 90%, 95%, and 99.5% respectively. While this data may present your team with a challenge, it does little to inspire real action.
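As a quick check of the arithmetic, multiplying the three factors confirms the threshold: OEE = Availability × Performance × Quality = 0.90 × 0.95 × 0.995 ≈ 0.851, or just over 85%.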

Understanding the policies and methods used to measure performance, coupled with an awareness of current best practices for achieving the desired levels of performance, will certainly provide a foundation for innovation and improvement. It is significant to note that today’s most efficient and successful companies achieved levels of performance above and beyond their competition by understanding and benchmarking their competitors’ best practices. With this data, the same companies went on to develop innovative best practices to outperform them.

A Practical Example

Availability is typically presented as the greatest opportunity for improvement. This is even suggested by the “World Class” levels stated above. Further investigation usually points us to setup / adjustment or changeover as one of the primary improvement opportunities. Many articles and books have been written on Single Minute Exchange of Dies and other quick tool change strategies, so it is not our intent to present them here. The point is that industry has identified this specific topic as a significant opportunity and, in turn, has provided significant documentation and varied approaches to improve setup time.

In the case of improving die changes, a variety of techniques are used, including:

  • Quick Locator Pins
  • Pre-Staged Tools
  • Rolling Bolsters
  • Sub-Plates
  • Programmable Controllers
  • Standard Pass Heights
  • Standard Shut Heights
  • Quarter Turn Clamps
  • Hydraulic Clamps
  • Magnetic Bolsters
  • Pre-Staged Material
  • Dual Coil De-Reelers
  • Scheduling Sequences
  • Change Over Teams versus Individual Effort
  • Standardized Changeover Procedures

As changeover time becomes less of a factor in determining what parts to run and for how long, we can strive for reduced inventories and improved preventive maintenance activities.

Today’s Challenge

The manufacturing community has been devastated by the recent economic downturn.  We are challenged to bring out the best of what we have while continuing to strive for process excellence in all facets of our business.

Remember to get your free Excel templates by visiting our FREE Downloads page. We appreciate your feedback. Please leave a comment or send an email to leanexecution@gmail.com or vergence.consultin@gmail.com.

Until Next Time – STAY Lean!

OEE Where do we Measure – Part II

We have stated that policies and procedures will have an impact on your OEE implementation strategy.  One reader commented on Part I of this post stating that “OEE should be measured at the ‘design’ bottleneck process / piece of equipment that sets the pace of the line.”  While this is certainly an effective approach, the question is whether or not company policy or procedure supports the measurement of OEE in this manner.  Nothing is as simple as it looks.  Take this to the boardroom and see what kind of response you get.  We’re flexible.

As such, this becomes yet another consideration for what is being measured, how the data is going to be used, and what the significance of the results is. While we didn’t allude to a multi-part series, the comment was indeed timely. The risk of not understanding the data is that other inefficiencies built into the process could mask either upstream or downstream disruptions.

Inventory – Hiding Opportunities

Whenever we think of the “bottleneck”, we instantly turn to the Theory of Constraints.  The objective is to ensure that the bottleneck operation is performing as required – no disruptions.  In many cases, process engineers will anticipate the bottleneck and incorporate buffers or safety stock into the process to minimize the effect of any potential process disruptions.

On one hand, inventory, whether in the form of off-line storage or internalized as a buffer (or part queue), will in essence minimize or eliminate the effects of external disruptions. On the other hand, there is a premium to be paid to carry the excess inventory.

While buffers or part queues can serve as a visual indicator of how well the process is performing (assuming the method used to calculate the queue quantities is correct), our previous post was alluding to the fact that many manufacturers incorporate contingency strategies into the process after the fact, such as inventory that was not part of the original process design or reworking product on line.

Incorporating a rework station into the manufacturing process because the tooling or equipment is not capable of producing a quality part at rate may eventually be absorbed as part of the “normal” or standard operating procedure. As such, it is important to manage standardized operating procedures in conjunction with Value Stream Maps to avoid degradation from the baseline process.

OEE can serve as an isolated diagnostic tool and as a metric to monitor and manage your overall operation.  Company policy should consider how OEE is to be applied.  While most companies manage OEE for all processes, they are typically managed individually.  Many companies also calculate weighted department, plant, and customer driven OEE indices.

Regardless of the OEE index reported, it is important to understand the complexities introduced by product mix and volumes when considering the use of a weighted OEE index. The variability of the individual OEE factors complicates interpretation of the net index even further.
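For reference, a minimal sketch of one common weighting approach (an illustration on our part; weight by volume, planned production time, or whatever your policy dictates):

    ' Volume-weighted OEE index: each process contributes in proportion
    ' to its weight (e.g., production volume or planned run time).
    Public Function WeightedOEE(oee() As Double, weight() As Double) As Double
        Dim i As Long, num As Double, den As Double
        For i = LBound(oee) To UBound(oee)
            num = num + oee(i) * weight(i)
            den = den + weight(i)
        Next i
        WeightedOEE = num / den
    End Function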

We have provided FREE Files for you to download and use at your convenience.  A detailed discussion is also provided in our OEE tutorial.  See the “FREE Files” BOX on the sidebar.

We look forward to your comments.  If you prefer, please send an e-mail to leanexecution@gmail.com

We look forward to hearing from you.

Until Next Time – STAY Lean!