Category: Standardized Work

OEE and Human Effort

[Image: “A girl riveting machine operator at the Dougla…” – The Library of Congress via Flickr]

I was recently asked to consider a modification to the OEE formula to calculate labour versus equipment effectiveness.  This request stemmed from the observation that some processes, like assembly or packing operations, may be completely dependent on human effort.  In other words, the people performing the work ARE the machine.

I have observed situations where an extra person was stationed at a process to assist with loading and packing of parts so the primary operator could focus on assembly alone.  In contrast, I have also observed processes running with fewer operators than required by the standard due to absenteeism.

In other situations, personnel have been assigned to perform additional rework or sorting operations to keep the primary process running.  It is also common for someone to be assigned to a machine temporarily while another machine is down for repairs.  In these instances, the ideal number of operators required to run the process may not always be available.

Although the OEE Performance factor may reflect the changes in throughput, the OEE formula does not offer the ability to discern the effect of labour.  It may be easy to recognize where people have been added to an operation because performance exceeds 100%.  But what happens when fewer people have been assigned to an operation or when processes have been altered to accommodate additional tasks that are not reflected in the standard?

Based on the discussion above, it seems reasonable to consider a formula based on Labour Effort.  The number of direct labour employees should be among the OEE factors that help us identify where variances to standard exist.  At a minimum, a new cycle time should be established based on the number of people present.
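One way to make this concrete is to scale the ideal cycle time by the ratio of standard crew to actual crew before computing the Performance factor.  The sketch below (in Python, with hypothetical function names and numbers) illustrates the idea; it is not a standard OEE definition:

```python
# Hypothetical labour-adjusted OEE sketch. The staffing-adjusted ideal
# cycle time is an assumption for illustration, not standard OEE.

def oee(availability: float, performance: float, quality: float) -> float:
    """Classic OEE: the product of the three factors."""
    return availability * performance * quality

def labour_adjusted_performance(ideal_cycle_time: float,
                                actual_cycle_time: float,
                                standard_crew: int,
                                actual_crew: int) -> float:
    """Scale the ideal cycle time by the staffing ratio before computing
    Performance, so running short- or over-crewed becomes visible."""
    adjusted_ideal = ideal_cycle_time * standard_crew / actual_crew
    return adjusted_ideal / actual_cycle_time

# Example: the standard calls for 2 operators at a 30 s ideal cycle time.
# With 3 operators the line actually cycles every 24 s.
classic = 30 / 24                      # 1.25 -> performance "exceeds 100%"
adjusted = labour_adjusted_performance(30, 24, 2, 3)
print(f"classic performance: {classic:.2f}")
print(f"labour-adjusted:     {adjusted:.2f}")
```

With the extra operator factored in, the same run reports roughly 83% rather than 125%, exposing the labour variance that the classic formula hides.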

OEE versus Financial Measurement

Standard Cost Systems are driven by a defined method or process and a rate for producing a given product.  Variances in labour, material, and/or process also become variances to the standard cost and are reflected as such in the financial statements.  For this reason, OEE data must reflect the “real” state of the process.

If labour is added (over standard) to an operation to increase throughput, the process has changed.  Unless the standard is revised, OEE results will be reported as higher, while the costs associated with production may show only a minimal variance because they are based on the standard cost.  We have now lost our ability to correlate OEE data with some of our key financial performance indicators.

Until Next Time – STAY lean!

Vergence Analytics


Foundations of Failure

The problem with many of the problems we find ourselves contending with at any given time is that we learn of their existence after the fact (when the damage is done) or after discovering that the results we were looking for simply didn’t materialize.  As we learned from Michael A. Roberto’s book, “Know What You Don’t Know“, there are a number of reasons why problems don’t surface until after the fact.  I highly recommend reading “Know What You Don’t Know” as well as Steven J. Spear’s “Chasing the Rabbit”, as both books present numerous examples and extensive research spanning a wide variety of manufacturing and service industries.

In many cases, the pathology of a problem reveals that much of the information surrounding the failure, or series of failures, was “common” knowledge.  In isolation, many of the contributing factors appear insignificant or irrelevant.  However, when we review all of the “insignificant” bits and pieces of evidence as part of the whole, we discover that it is these pieces that make the puzzle complete.  This was certainly the case with 9/11, Three Mile Island, the Challenger, and perhaps even the most recent economic collapse.

The expression “If it ain’t broke, don’t fix it” comes to mind as I think of problem solving in the general sense.  Even today, we find this philosophy embedded deep within the culture of many corporations.  In some cases, designs are so fragile that any deviation from normal or intended use may result in failure.  Broken plates and glasses remind us that they are not intended to be dropped, whether intentionally or by accident.  We are also painfully aware that the absence of past failures does not make a process or product immune to future failure.  There are too many examples where past successes were suddenly shattered by a catastrophic failure.

Of course, many products are subject to numerous hours of testing at varying degrees of severity and exposure limits.  Yet somehow, these tests still do not capture all of the failure modes observed in practice.  Too many product recalls are evidence of our inability to anticipate the vast array of problems that continue to haunt manufacturers around the globe.  Maybe the “If it ain’t broke” expression needs a little rework itself:  if it ain’t broke, please break it – or at least try!  Computer hackers around the world have been giving Microsoft and other major corporations their fair share of problems as they continually find and develop “new” ways to break into very sophisticated, high-tech systems.

Our Foundations of Failure model is based on the premise that every failure has a known and identifiable root cause.  The challenge for today’s companies is to learn how to identify problems before the product makes it to market or the process is released to manufacturing for production.  The objective is to instill an innate ability to constructively critique your concepts and designs to identify and anticipate the “What if …” scenarios that your product or service may be subject to.

Perhaps an even greater skill to be learned is to identify and anticipate how the product or process may be used or abused – with or without intent or malice.  From this perspective, lean manufacturing principles and standardized work can certainly help us map our road to success.  Technically, if the ideal process and its inherent steps are fully specified, then any deviation from the prescribed process or design can lead to a system breakdown or product failure.  As discussed in “Chasing the Rabbit“, this was (and is) certainly the case for US nuclear submarines.

Are your system, process, and product specifications documented to the extent that deviations from their intended purpose or function can be, or are, readily identified?  Is it even possible to forecast or anticipate every possible failure mode?  Is it fair to suggest that prescribing a solution to a problem suggests that the original scope of the problem was or is fully understood?

As we have learned from the numerous failures in our financial markets and the collapse of many high profile businesses and companies around the globe, common symptoms and effects of failure may be the result of radically different root causes:  ignorance, negligence, willful misconduct, and even fraud.  We need to implement systems and processes that are robust and assure our future successes are built on solid fundamental business practices.  When the foundation is faulty, the entire business enterprise is at risk.

In summary, the first step is the most critical step.  The first few steps of any new initiative, process, product, or service form the foundation of all decisions that follow.  Just as a building requires a solid foundation, so do our future successes.  I recall a little sign posted in a retail store that read as follows:

Lovely to look at,
Lovely to Hold,
But if you drop it …
Consider it Sold!

Until Next Time – STAY lean!

Flawless Execution – “This Is It” – Practice Makes Perfect

We are often encouraged to look beyond our own business models to expand our horizons or simply to gain a different perspective.  Music is one of my personal interests outside of work, and I have learned to appreciate and value the many genres of music that exist today.  Having played lead guitar for a number of bands over the years, and having done a little recording in my studio, I can only imagine the level of commitment required to perform and record professionally.

I was inspired to write this post after watching Michael Jackson’s DVD, “This Is It“.  It is impressive to see how everyone is engaged and intimately involved with every nuance of the performance – from the performers themselves to the people working behind the scenes.  Even more amazing was Michael Jackson’s recall of every note and step of the choreography.  Michael provided extensive direction and leadership to ensure a world-class performance could be delivered.

What does this have to do with Lean?

At its core, playing music can simply be described as playing the right notes at the right time.  In many respects, music is analogous to many of our manufacturing processes.  Music has a known process rate (beats per minute).  The standardized work or method is the music score that shows what notes to play and when to play them.  Similarly, the choreography serves as standardized work to document each and every step or movement for each performer.  It can be very obvious (and painful) when someone plays the wrong note, sounds a note at the wrong time, or mis-steps.

Knowing that “This is it” was produced from film during the development of the production also exemplifies how video can be used to not only capture the moment but to improve the process along the way.  The film provides the opportunity to review the performance objectively even if you happen to be in it.  You will note that people are much more engaged and become “self-aware” in a radically different way.

Communication + Practice makes Perfect

It is also readily apparent that many hours of rehearsal are required to produce a world-class performance.  Imagine working for days, weeks, months, or even years to produce a two-hour show for all of the world to see.  How much can one person do to refine and perfect the performance?  How much effort would you be willing to expend knowing that literally billions of people may someday be watching you!

As professionals, individual performers are expected to know their respective roles thoroughly.  They are paid for their expertise and their ability to perform under high expectations and demanding circumstances.  The purpose of a rehearsal is not necessarily to practice your part as an individual, but rather to exercise your expertise as part of the team.  Each performer must learn their cues from other performers and determine how they relate and fit into the overall production process.  Rehearsals provide the basis of the team’s communication strategy, ensuring everyone is on the same page all the time, every time.

Effective Training

Finally, “This is it” demonstrates the importance of training the whole team.  Although individual training may be required, eventually the team must be brought together in its entirety.  A downfall of many business training programs is that often only a select few people from various departments are permitted to attend with the expectation that they will bring what they learned “back to the team”.  One of the most overlooked elements of training is the communication and coordination of activities between team members.  Group breakout sessions attempt to improve interaction among team members, but this can’t replace the reality of working with the team on home turf.  It seems that some companies expect trained professionals to intuitively know how to communicate and interact with each other.  Nothing could be further from the truth if you are looking to develop a high performance team.

Last Words

Imagine what it would be like if we rehearsed our process and material changes with the same persistence and raw determination that performers and athletes in the entertainment and sports worlds exhibit.  Overall Equipment Effectiveness, and more specifically Availability, might improve beyond our expectations.  Imagine applying the same degree of standardization to the tasks we perform every day!  As we strive for excellence, our tolerance for anything less diminishes as well.

Flawless execution requires comprehensive planning, communication, training, practice, measurement, reflection, leadership, commitment, and dedication.

It’s time to play some riffs!

Until Next Time – STAY lean!

22 Seconds to Burn – Excel VBA Teaches Lean Execution

[Image: cover of “Excel 2003 Power Programming with VBA” via Amazon]

Background:

VBA for Excel has once again provided the opportunity to demonstrate some basic lean tenets.  The methods used to produce the required product or solution can yield significant savings in time and ultimately money.  The current practice is not necessarily the best practice in your industry.  In manufacturing, trivial or minute differences in methods deployed become more apparent during mass production or as volume and demand increases.  The same is true for software solutions and both are subject to continual improvement and the relentless pursuit to eliminate waste.

Using Excel to demonstrate certain aspects of Lean is ideal.  Numbers are the raw materials and formulas represent the processes or methods to produce the final solution (or product).  Secondly, most businesses are using Excel to manage many of their daily tasks.  Any extended learning can only help users to better understand the Excel environment.

The Model:

We recently created a perpetual Holiday calendar for one of our applications and needed an algorithm or procedure to calculate the date for Easter Sunday and Good Friday.  We adopted an algorithm found on Wikipedia at http://en.wikipedia.org/wiki/Computus that produces the correct date for Easter Sunday.

In our search for the Easter Algorithm, we found another algorithm that uses a different method of calculation and provides the correct results too.  Pleased to have two working solutions, we initially did not spend too much time thinking about the differences between them.  If both routines produce the same results then we should choose the one with the faster execution time.  We performed a simple time study to determine the most efficient formula.  For a single calculation, or iteration, the time differences are virtually negligible; however, when subjected to 5,000,000 iterations the time differences were significant.
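For reference, the Anonymous Gregorian algorithm from the Wikipedia Computus page can be expressed in a few lines.  The sketch below is in Python rather than VBA for illustration; it mirrors the integer arithmetic of the published algorithm:

```python
from datetime import date, timedelta

def easter_sunday(year: int) -> date:
    """Anonymous Gregorian algorithm (Meeus/Jones/Butcher) from the
    Wikipedia Computus page, using integer division throughout."""
    a = year % 19
    b, c = divmod(year, 100)
    d, e = divmod(b, 4)
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    h = (19 * a + b - d - g + 15) % 30
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7
    m = (a + 11 * h + 22 * l) // 451
    month, day = divmod(h + l - 7 * m + 114, 31)
    return date(year, month, day + 1)

def good_friday(year: int) -> date:
    """Good Friday is two days before Easter Sunday."""
    return easter_sunday(year) - timedelta(days=2)

print(easter_sunday(2010))   # 2010-04-04
print(good_friday(2010))     # 2010-04-02
```

Spot-checking a few years against published Easter dates (e.g., 23 April 2000, 4 April 2010) is a quick way to validate any adaptation of the routine.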

This number of cycles may seem grossly overstated; however, when we consider how many automobiles and components are produced each year, 5,000,000 is only a fraction of the total volume.  Taken further, Excel performs thousands of calculations a day, and many times more as numbers or data are entered on a spreadsheet.  When we consider the number of “calculations” performed at any given moment, the number quickly grows beyond comprehension.

Testing:

As a relatively new student of John Walkenbach’s book, “Excel 2003 Power Programming with VBA“, speed of execution, efficiency, and “declaring your variables” have entered into our world of Lean.  We originally created two routines called EasterDay and EasterDate.  We then created a simple procedure to run each function through 5,000,000 cycles.  Again, this may sound like a lot of iterations, but computers work at remarkable speeds and we wanted enough resolution to discern any time differences between the routines.
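The timing harness itself is straightforward: call the routine in a tight loop and measure the elapsed time.  A rough Python equivalent of our VBA test procedure might look like this (the two stand-in functions below are placeholders, not the actual Easter routines):

```python
import time

def time_it(func, iterations=5_000_000, *args):
    """Call func `iterations` times and return elapsed wall-clock seconds."""
    start = time.perf_counter()
    for _ in range(iterations):
        func(*args)
    return time.perf_counter() - start

# Illustrative stand-ins for the two routines under test:
def easter_day(year):
    return (year % 19) * 11 % 30      # placeholder work only

def easter_date(year):
    return (year * 5 + 3) % 30        # placeholder work only

elapsed = time_it(easter_day, 1_000, 2010)
print(f"1,000 iterations took {elapsed:.4f} s")
```

In VBA we used the Timer function in the same way; the loop overhead is identical for both routines, so the difference between two timed runs isolates the cost of the routines themselves.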

The difference in the time required to execute 5,000,000 cycles by each of the routines was surprising.  We recorded the test times (measured in seconds) for three separate studies as follows:

  • Original EasterDay:  31.34,  32.69,  30.94
  • Original EasterDate:  22.17,  22.28,  22.25

The differences between the two methods ranged from 8.69 to 9.17 seconds.  Expressed in different terms, the duration of the EasterDay routine is 1.39 to 1.46 times that of EasterDate.  Clearly the original EasterDate function has the better execution speed.  What we perceive as virtually identical systems or processes at low volumes can yield significant differences that are often only revealed by increased volume or the passage of time.

In the Canadian automotive industry there are at least 5 major OEMs (Toyota, Honda, Ford, GM, and Chrysler), each producing millions of vehicles a year.  All appear to produce similar products and perform similar tasks; however, the performance ratios for each of these companies are starkly different.  We recognize Toyota as the high-velocity, lean, front-running company.  We contend that Toyota’s success is partly driven by an inherent attention to the details of processes and product lines at all levels of the company.

Improvements

We decided to revisit the Easter Day calculations or procedures to see what could be done to improve the execution speed.  We created a new procedure called “EasterSunday” using the original EasterDay procedure as our base line.  Note that the original Wikipedia code was only slightly modified to work in VBA for Excel.  To adapt the original Wikipedia procedure to Excel, we replaced the FLOOR function with the INT function in VBA.  Otherwise, the procedure is presented without further revision.

To create the final EasterSunday procedure, we made two revisions to the original code without changing the algorithm structure or the essence of the formulas themselves.  The changes resulted in significant performance improvements, summarized as follows:

  1. For integer division, we replaced the INT( n / d ) statements with the less commonly used (or known) “\” integer division operator.  In other words, we used “n \ d” in place of “INT( n / d )” wherever an integer result is required.  This change alone resulted in a gain of 11 seconds.  One word of caution if you plan to use the “\” operator:  “n” and “d” are rounded to integers before the division is performed.
  2. We declared each of the variables used in the subsequent formulas and gained yet another remarkable 11 seconds.  Although John Walkenbach and certainly many other authors stress declaring variables, it is surprising to see very few published VBA procedures that actually put this to practice.
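A similar distinction exists in other languages.  In Python, for example, int(n / d) routes through floating point while n // d stays in integer arithmetic; a rough microbenchmark sketch (timings will vary by machine, and Python’s // floors negative results, unlike the truncating int() conversion, so the two are not interchangeable for negative operands):

```python
import timeit

n, d = 1_234_567, 89

# Two ways to get an integer quotient, loosely mirroring the VBA choice
# between INT(n / d) and the "\" operator:
via_float = timeit.timeit(lambda: int(n / d), number=200_000)
via_floor = timeit.timeit(lambda: n // d, number=200_000)

print(f"int(n / d): {via_float:.3f} s")
print(f"n // d:     {via_floor:.3f} s")

# For positive operands the results agree:
assert int(n / d) == n // d
```

As with the VBA operators, the semantics are not identical at the edges (negative operands, very large values that exceed float precision), which is exactly why the faster form deserves a word of caution before adoption.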

Results:

The results of our Time Tests appear in the table below.  Note that we ran several timed iterations for each change knowing that some variations in process time can occur.

Run 1:
  • EasterDay = 31.34375 (original code, uses INT( n / d ) for integer division)
  • EasterSunday = 20.828125 (change 1: INT( n / d ) replaced with ( n \ d ))
  • EasterDate = 22.28125 (original code, alternate calculation method)

Run 2 (re-test to confirm timing):
  • EasterDay = 30.9375
  • EasterSunday = 20.921875 (change 1)
  • EasterDate = 22.25

Run 3 (re-test to confirm timing):
  • EasterDay = 30.90625
  • EasterSunday = 21.265625 (change 1)
  • EasterDate = 22.25

Run 4 (re-test, with change 2 added):
  • EasterDay = 31.078125
  • EasterSunday = 9.171875 (changes 1 and 2: variables declared)
  • EasterDate = 22.1875

Run 5 (re-test to confirm timing):
  • EasterDay = 31.109375
  • EasterSunday = 9.171875 (changes 1 and 2)
  • EasterDate = 22.171875

The EasterSunday procedure contains the changes described above.  We achieved a total savings of approximately 22 seconds.  Both integer division methods yield the same result; however, one is clearly faster than the other.

The gains made by declaring variables were just as significant.  In VBA, undeclared variables default to the “Variant” type.  Although Variant types are more flexible by definition, performance diminishes significantly.  We saved at least an additional 11 seconds simply by declaring variables.  Variable declarations are to VBA as policies are to your company:  they define the “size and scope” of the working environment.  Undefined policies or vague specifications create ambiguity and generate waste.

Lessons Learned:

In manufacturing, a 70% improvement is significant – worthy of awards, accolades, and public recognition.  The lessons learned from this example are eight-fold:

  1. For manufacturing, do not assume the current working process is the “best practice”.  There is always room for improvement.  Make time to understand and learn from your existing processes.  Look for solutions outside of your current business or industry.
  2. Benchmarking a current practice against another existing practice is just the incentive required to make changes.  Why is one method better than another?  What can we do to improve?
  3. Policy statements can influence the work environment and execution of procedures or methods.  Ambiguity and lack of clarity create waste by expending resources that are not required.
  4. Improvements to an existing process are possible, with results that outperform the nearest known competitor.  We anticipated, at a minimum, being able to have the two routines run at similar speeds.  We did not anticipate the final EasterSunday routine running more than 50% faster than our simulated competitive benchmark (EasterDate).
  5. The greatest opportunities are found where you least expect them.  Learning to see problems is one of the greatest challenges that most companies face.  The example presented in this simple analogy completely shatters the expression, “If it ain’t broke, don’t fix it.”
  6. Current practices are not necessarily best practices and best practices can always be improved.  Focusing on the weaknesses of your current systems or processes can result in a significant competitive edge.
  7. Accelerated modeling can highlight opportunities for improvement that would otherwise not be revealed until full high volume production occurs.  Many companies are already using process simulation software to emulate accelerated production to identify opportunities for improvement.
  8. The most important lesson of all is this:

Speed of Execution is Important >> Thoughtful Speed of Execution is CRITICAL.

We wish you all the best of this holiday season!

Until Next Time – STAY Lean!

Vergence Analytics

At the onset of the Holiday project, the task seemed relatively simple until we discovered that the rules for Easter Sunday did not follow the simple rules that applied to other holidays throughout the year.  As a result we learned more about history, astronomy, and the tracking of time than we ever would have thought possible.

We also learned that Excel’s spreadsheet MOD function is subject to precision errors, and that VBA’s Mod operator can yield a different result than the spreadsheet version.

We also rediscovered Excel’s leap year bug (29-Feb-1900).  1900 was not a leap year.  The bug resides in the spreadsheet version of the date functions; the VBA date functions correctly recognize that 29-Feb-1900 is not a valid date.
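Both quirks are easy to demonstrate.  The Python sketch below illustrates the same two pitfalls for comparison (Python’s datetime behaves like the VBA date functions here, not like the spreadsheet functions):

```python
from datetime import date

# Floating-point remainders are subject to the same precision traps as a
# spreadsheet MOD formula computed from floats:
print(0.5 % 0.1)          # not 0 -- roughly 0.09999999999999998

# 1900 was not a leap year (divisible by 100 but not by 400), and a
# correct date library rejects 29-Feb-1900, just as VBA's date functions do:
try:
    date(1900, 2, 29)
except ValueError:
    print("29-Feb-1900 rejected: 1900 was not a leap year")

print(date(2000, 2, 29))  # 2000 *was* a leap year (divisible by 400)
```

The remainder example is a useful reminder that any holiday algorithm built on floating-point MOD should be validated against known dates before being trusted.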

Agility Through Problem Solving: a Model for Training and Thinking

We tend to use analogies when we are discussing certain topics, introducing new concepts, or simply presenting an abstract idea.  Analogies are intended to serve as a model that people understand, can relate to or identify with, and, more importantly, remember.  Our challenge is to identify a simple model that can be used to teach people to identify and solve problems – a core competency requirement for lean.

We have learned that teaching people to see problems is just as important as teaching them to solve problems.  Our education system taught us how to use the scientific method to solve problems that were already conveniently packaged in the form of a question or modeled in a case study.  Using case studies for teaching is typically more effective than traditional “information only” or “just the facts” methods.  (The government of Ontario is presently considering a complete overhaul of the education system using case studies as a core instruction method.)

The effectiveness of any training people receive is compromised by time – the retention span.  Our school systems are challenged by this at the start of every school year.  Teachers must re-engage students with materials covered in the last semester or topics covered prior to the break.  In business we may be too eager to provide training at a time when current business activities are not aligned for the new skills to be practiced or exercised.  A commitment to training also requires  a commitment to develop and routinely exercise these skills to stay sharp.

One of the fundamental rules of engagement for lean is to eliminate waste, where value added activities are optimized and non-value added activities are reduced or eliminated.  Although it may appear that we have identified the problem to be solved, in reality we have only framed the objective to be achieved.  We understand that the real solution to achieving this objective is by solving many other smaller problems.

The Sudoku Analogy – A Model for Finding and Solving Problems

A favourite pastime of ours is solving Sudoku puzzles – the seemingly simple 9 x 9 matrix of numbers just waiting for someone to enter the solution.  The reasons for selecting and recommending Sudoku as an introductory model for training are as follows:

  1. Familiarity:  Sudoku puzzles are published in many daily newspapers and numerous magazines, and they have become as popular as crossword puzzles.  Most people have either attempted to solve a puzzle or know someone who has.
  2. Rules of Engagement:  the rules of the game are simple.  Each standard Sudoku puzzle has 9 rows and 9 columns that form a grid of 81 squares.  This grid is further divided into nine 3 x 3 sub-sections.  The challenge is to enter the digits 1 through 9 into the blank spaces on the grid.  Every row, column, and 3 x 3 sub-section of the grid must contain one and only one of each digit.  We refer to these as “rules of engagement” as opposed to “framing the problem”.
  3. Degrees of Difficulty:   Sudoku puzzles are typically published in sets of 3 puzzles each having varying degrees or levels of difficulty.  Each level typically requires more time to complete and requires the player to use more complex reasoning or logic skills.  The claim is that all puzzles can be solved.
  4. Incremental or Progressive Solutions:  Sudoku solutions are achieved incrementally by solving instances of smaller problems.  In other words, the solution builds as correctly deduced numbers are added to the grid.  New “problems” are discovered as part of the search for the final solution.
  5. Variety:  every Sudoku game is different.  While some of the search and solve techniques may be similar, the problems and challenges presented by each game are uniquely different.  Although the rules of engagement are constant, the player must search for and find the first problem to be solved.
  6. Single Solution:  multiple solutions may appear to satisfy the rules of the game; however, only one solution exists.  Learning to solve Sudoku puzzles may be a challenge for some players, and even seasoned Sudoku players can be stumped by some of the more advanced puzzles.  To this end, they are ever and always challenging.
  7. Skill Level:  Sudoku puzzles do not require any math skills.  Numbers are naturally easier to remember and universal.  Letters are language dependent and the game would lose international appeal.
  8. Logical:  deductive reasoning is used to determine potential solutions for each empty square in the grid.  As the game is played, a player may identify a number of potential solutions for a single square.  The final solution for each square is eventually resolved as the game is played.
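The rules of engagement in point 2 translate directly into a small validity check.  Here is a minimal Python sketch, assuming the grid is represented as a 9 x 9 list of lists with 0 for blanks (the representation and function name are our own, for illustration):

```python
def placement_is_valid(grid, row, col, digit):
    """Check whether `digit` may be placed at (row, col) under the rules of
    engagement: no repeat in the row, the column, or the 3 x 3 sub-section."""
    if any(grid[row][c] == digit for c in range(9)):
        return False
    if any(grid[r][col] == digit for r in range(9)):
        return False
    top, left = 3 * (row // 3), 3 * (col // 3)
    return all(grid[r][c] != digit
               for r in range(top, top + 3)
               for c in range(left, left + 3))

# Tiny usage example on an almost-empty grid:
grid = [[0] * 9 for _ in range(9)]
grid[0][0] = 5
print(placement_is_valid(grid, 0, 8, 5))   # False: 5 already in row 0
print(placement_is_valid(grid, 1, 1, 5))   # False: 5 already in the sub-section
print(placement_is_valid(grid, 4, 4, 5))   # True
```

Every candidate move is tested against the same three constraints, which is exactly the “test and validate” discipline described in the objectives below.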

In practice

We recommend introducing the team to Sudoku using an example to demonstrate how the game is played.  It is best to discuss some of the strategies that can be used to find solutions that eventually lead to solving the complete puzzle.  The Sudoku model will allow you to demonstrate the following ten objectives:

  1. Look for Options:  The solution to the problem being solved may consist of many other smaller problems of varying degrees of difficulty.
  2. Break down the problem:  There may be more than one problem that needs to be solved.  Every Sudoku puzzle represents many different problem instances that need to be resolved before arriving at the final solution.  Each incremental solution to a problem instance is used to discover new problems to solve that also become part of the overall solution.  This may also be termed as progressive problem solving.
  3. Multiple solutions – One Ideal:  There may be times where more than one solution seems possible.  Continue to solve other problems on the grid that will eventually reveal the ideal single solution.
  4. Prioritizing:  more than one problem instance may be solvable at the same time, however, you can only focus on one at a time.
  5. Focus:  Problem solving involves varying states of focus:
    • Divergence:  Expand the focus and perform a top-level search for a problem from the many to be solved
    • Convergence:  Narrow the focus on the specific problem instance and determine the specific solution.
  6. Test and Validate:  Every problem instance that is solved is immediately verified or validated against the other squares on the grid.  In other words the solution must comply with the rules of engagement.
  7. Incubation:  some puzzles can be quite difficult to solve.  Sometimes you need to take a break and return later with fresh eyes.
  8. Action:  There is no defined or “correct” starting point.  The first problem instance to be resolved will be as unique as the number of players participating.  No matter where you start, the finished solution will be exactly the same.
  9. Tangents:  when entering a solution into a square, you may notice other potential problems or solutions that suddenly seem to appear.  It is very easy to digress from the original problem / solution.  This is also true in the real world, where “side projects” somehow become the main focus.
  10. Method:  There is no pre-defined method or approach to determine what problem to solve first.  The only guiding principles for discovering the problem instance to be solved are the rules of engagement.

Lean companies train their teams to see problems and break them down into smaller problems with solvable steps.  Sudoku demonstrates the process of incremental or progressive problem solving.  Even with this technique it is possible to enjoy major breakthrough events.  There are times when even seasoned Sudoku players will recognize the “breakthrough point” when solving a puzzle.

Solve time is another element of the Sudoku puzzle that may be used to add another level of complexity to the problem solving process.  Our objective was not to create a competitive environment or to single out any individual skill levels whether good or bad.  Lean is a TEAM sport.

In Summary:

Sudoku solvers are able to hone their skills every day.  Perhaps Sudoku Masters even exist.  Imagine someone coming to work with the same simple focus to eliminate waste every day.  Although there is no preset solution, we are able to identify and consider any number of potential problems and solve them as quickly as we can.  The smaller problems solved are a critical part of the overall solution to achieve the goal.

Most professional athletes and musicians understand that skills are developed through consistent practice and exercise.  Repetition develops technique and speed.  Imagine a culture where discovering new opportunities or problems and implementing solutions  is just a normal part of the average working day.  This is one of the defining traits that characterize high velocity companies around the world.

Truly agile companies are experts at seeing and solving problems quickly.  They discover new opportunities in everyday events that in turn become opportunities to exercise their problem seeing and solving skills.  Crisis situations are circumvented early and disruptions are managed with relative ease – all in a day's work.

The next time you see a Sudoku puzzle you may:

  • be inclined to pick up a pencil and play or
  • be reminded of the time you were inspired by the game to solve problems and reach new goals or
  • simply reflect on this post and ponder your next breakthrough.

Until Next Time – STAY Lean!

Software Modeling for Standardized Work

The concept of Standard Work is understood in virtually any work environment and is not exclusive to the lean enterprise.  Typically, the greater challenge of standardized work is actually preparing an effective document that adequately describes the “work” to be performed.

The objective of standardized work is to provide a documented “method” for completing a sequence of tasks that can be executed to consistently yield a quality product at rate, regardless of the person who is performing the work.  The documentation created usually falls short of this expectation.

The Ideal Model for Standardized Work

We would expect to find examples of well-documented standardized work at Nuclear Stations, Military Installations, in Aerospace, and many other places where risks are high and operation sequences are critical.  High velocity, lean organizations recognize that a disciplined process approach is the key to discovering opportunities for improvement and to supporting future “problem” solving activities if required.

Computer programs are perfect models of standardized work in action.  They perform the same tasks day after day, collecting, storing, and processing data.  We have certain performance expectations although we seldom understand the inner workings of the programs themselves.

This is certainly true for the computers we deal with in our personal lives such as mobile phones, instant banking machines, GPS mapping systems, or the many “gaming” programs that people play.  Our interactions with the “program” are limited to the HMI, or Human-Machine Interface, and represent only a tiny fraction of the thousands of lines of computer code that are executing the transaction requests in the background.

The Software Development Model

Although few of us may ever write a program, we do understand that every instruction or line of code in a program is critical to the successful execution of the program as a whole.  Every line of code represents a specific instruction, process sequence, or step that must be executed by the computer.  Similarly, standardized work identifies the specific steps that must be followed to successfully complete the task.

Any time a computer system “goes down” or a critical error occurs, someone in the IT department is looking for the source of the problem.  The software is typically written to at least provide a hint as to where the problem may be.  In some cases the solution may be as drastic as rebooting the system or as simple as reloading the specific application.

We should be able to perform a similar analysis when a process fails to perform to expectations or when we are confronted with quality issues or other process disruptions or failures.  The ability to consistently repeat a sequence of steps is directly correlated to the quality and level of detail described in the standardized work document.

Aside:  An example from the Quality department:

Gage Repeatability and Reproducibility studies, also known as Gage R&R studies, are often used to validate the effectiveness of a measurement system or method (fixture or equipment) for a specific application.  If the Gage R&R result is less than 10%, the gauge or fixture is deemed to be acceptable; a result between 10% and 30% may be conditionally acceptable depending on the application; greater than 30% renders the measurement system unusable for the application.  The results can be evaluated statistically to determine or differentiate whether the “problem” is repeatability (equipment variability) or reproducibility (operator variability).
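The thresholds above can be sketched as a small calculation.  The following is a minimal Python illustration, using hypothetical variance components from a study; the function names and figures are assumptions for the example, not part of any standard library:

```python
import math

def grr_percent(repeatability_var, reproducibility_var, part_var):
    """Percent Gage R&R: gauge variation as a share of total variation.

    Repeatability (equipment) and reproducibility (operator) variances
    combine into the gauge variance; the result is the ratio of gauge
    standard deviation to total standard deviation, as a percentage.
    """
    grr_var = repeatability_var + reproducibility_var
    total_var = grr_var + part_var
    return 100 * math.sqrt(grr_var / total_var)

def classify(pct):
    """Apply the common acceptance thresholds described above."""
    if pct < 10:
        return "acceptable"
    elif pct <= 30:
        return "marginal"  # may be acceptable depending on the application
    return "unacceptable"

# Hypothetical study results (variance components, squared units)
pct = grr_percent(repeatability_var=0.002, reproducibility_var=0.001,
                  part_var=0.397)
print(round(pct, 1), classify(pct))  # → 8.7 acceptable
```

A marginal result would trigger exactly the kind of gauging review described next: net locations, clamping sequences, and pressures are all candidates for reducing the gauge's share of the total variation.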

When the measurement system fails to meet the requirements of the application, a significant amount of time and effort is expended to achieve an acceptable result.  The gauging strategy is reviewed, including part / fixture net locations, quantity of net pads and / or pins, net pad and / or pin sizes, clamping sequences, and clamping pressures; all in an effort to improve the measurement system.  Instructions are revised and operators are retrained accordingly.

In contrast, we seldom see the same level of time and effort expended to develop, analyze, test and document standardized work at the machine or station where the work is actually being performed.  Although the process may be improved to yield a quality product, the method or work instruction to achieve a consistent result is not adequately described or defined.

Understanding the tasks to be performed and the time required to perform them is essential to determine effective process cycle times (rates) and also to understand where changes to the process may yield improved performance.  This is of particular importance for companies using OEE as a key process metric.

Note:  one indicator that standardized work methods should be reviewed is when poor performance is excused as the fault of a “new operator” or a “steep learning curve”.

Extending the Program Model Concept

We can all appreciate the “built-in” or inherent discipline of computers executing thousands of lines of code in the same sequence every time the program runs.  To add complexity to our model, consider the discipline and learning that is necessary to write the code itself.  The software development team must understand the purpose of the program, how it will be used, design and create a user interface, determine programming algorithms to achieve the desired results and functionality, and ultimately they must write the code that will perform these functions.

Anyone who has attempted to write a program, or knows someone who has, will also be familiar with the term “DEBUG”.  There may be at least as many hours spent testing and debugging code as writing it.  Even after hundreds of hours of testing, some “bugs” still make it into the real world.  Microsoft’s bug-laden operating system releases have been the target of Apple Computer advertising campaigns for this very reason.

Some code may function without error when executed in isolation, and some bugs may not be discovered until the module is interacting with the program as a whole.  In this regard, it is also important to consider potential interactions with other processes when developing standardized work.  Upstream and downstream operations may have a direct impact on the work being performed.

The software development team must select the programming language that will be used to develop the final code and the individual programmers must also follow and understand the syntax and language protocols.  Although the product of the software development team is the “executable” program that the computer will run, we can be assured that the process for arriving at this final product is also quite rigorous.

Although we never get to see the native or original code, the modules are likely highly optimized, commented thoroughly, and well documented.  These comments are technically “non-value added” steps in the program; however, they usually describe the scope and purpose of the procedure or clarify any code intentions or algorithms.  These comments are valuable when debugging may be required or when the code is subject to future reviews.

The discipline of software development is not too far from the level of discipline that should be in place to develop standardized work.  The quality of standardized work processes would improve dramatically if each sequence was given the same level of scrutiny as a single line of code.

Making Improvements with Standardized Work

You may be wondering how flexibility exists in an environment of extreme discipline and rigid rules.  It is actually the rigidity and discipline that supports or encourages flexibility.  The discipline is in place to encourage managed change events without compromising current process knowledge or levels of understanding.  A well-defined process is much easier to understand and therefore is also easier to analyze for improvement opportunities.  The level of understanding should be such that a quantifiable margin or level of improvement can be predicted.

With reference to our software model, you will appreciate that the efficiency or speed of the program is dependent on the methods or algorithms the programmer used to develop the solution.  How many times have you stared at your computer wondering “what’s taking so long?”  The timing for a simple data sort can vary depending on the method or sorting algorithm chosen by the programmer.

The very language elements or functions that the programmer uses will have a profound effect on program execution time.  Many programmers have developed and use high precision “timing” functions to help optimize their code for efficiency and speed of execution.  Machine language level programmers are likely to know how many clock cycles each instruction requires.
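The effect of algorithm choice on “cycle time” can be demonstrated directly.  The Python sketch below, using an arbitrary made-up data set, times a simple O(n²) bubble sort against the language's built-in O(n log n) sort; both produce the identical result, but at very different speeds:

```python
import random
import time

def bubble_sort(items):
    """O(n^2) comparison sort: simple to write, but slow on large lists."""
    data = list(items)
    n = len(data)
    for i in range(n):
        for j in range(n - 1 - i):
            if data[j] > data[j + 1]:
                data[j], data[j + 1] = data[j + 1], data[j]
    return data

random.seed(42)
sample = [random.random() for _ in range(2000)]

start = time.perf_counter()
slow = bubble_sort(sample)
bubble_time = time.perf_counter() - start

start = time.perf_counter()
fast = sorted(sample)  # built-in Timsort
builtin_time = time.perf_counter() - start

assert slow == fast  # same result, very different "cycle time"
print(f"bubble sort: {bubble_time:.4f}s  built-in sort: {builtin_time:.4f}s")
```

The parallel to standardized work is direct: two methods can yield the same quality result while one consumes far more time, and only measurement reveals the difference.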

Understanding the “process instructions” at this level creates very specific challenges with predictable outcomes and a degree of certainty when changes are considered.  Changing an algorithm is quite different from simply changing a line of code within a specific function.  However, the scope, purpose, and impact of the change can be clearly defined and assessed in advance.

Last Words:

One of the more significant lean developments in manufacturing was the introduction of Quick Changeover and Single Minute Exchange of Dies (SMED).  The setup time reductions that have been achieved are truly remarkable and continue to improve with advances in technology.

When Quick Changeover and SMED programs were first introduced, most companies did not have a defined setup procedure or process.  The most significant effort was spent developing actual setup instructions:  identifying tasks to be completed, determining the sequence of events, who was responsible, and when they could be performed.

Ultimately external and internal setup activities were defined, setup teams were created, specific tasks and sequences of events were assigned and defined, and setup times were reduced from hours to 10 minutes or less.
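The internal / external split described above can be expressed as a simple model.  The Python sketch below uses hypothetical task names and times; only internal tasks, which require the machine to be stopped, count toward changeover downtime, while external tasks are completed before or after the stoppage:

```python
# Hypothetical setup tasks: (name, minutes, "internal" or "external").
# External tasks are performed while the machine is still running;
# internal tasks require the machine to be stopped.
tasks = [
    ("stage next die at press", 15, "external"),
    ("pre-heat tooling",        20, "external"),
    ("unclamp and remove old die", 4, "internal"),
    ("load and clamp new die",     5, "internal"),
    ("first-piece check",          3, "internal"),
]

downtime = sum(minutes for _, minutes, kind in tasks if kind == "internal")
offline  = sum(minutes for _, minutes, kind in tasks if kind == "external")

print(f"machine downtime: {downtime} min "
      f"(vs {downtime + offline} min if every task were done internally)")
```

In this made-up example, converting the staging and pre-heat tasks to external work cuts the stoppage from 47 minutes to 12; further reductions would come from attacking the remaining internal tasks themselves.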

Standardized Work is a fundamental element of Lean Manufacturing.  Just as notes are the language of musicians, making great songs and sounds possible, so too is Standardized Work the language of a Lean organization.

Until Next Time – STAY Lean!