The concept of Standard Work is understood in virtually any work environment and is not exclusive to the lean enterprise. Typically, the greater challenge of standardized work is preparing an effective document that adequately describes the “work” to be performed.
The objective of standardized work is to provide a documented “method” for completing a sequence of tasks that can be executed to consistently yield a quality product at rate, regardless of who is performing the work. The documentation actually created usually falls short of this expectation.
The Ideal Model for Standardized Work
We would expect to find examples of well-documented standardized work at nuclear stations, military installations, in aerospace, and many other places where risks are high and operation sequences are critical. High-velocity, lean organizations recognize that a disciplined process approach is the key to discovering opportunities for improvement and to supporting future problem-solving activities if required.
Computer programs are perfect models of standardized work in action. They perform the same tasks day after day, collecting, storing, and processing data. We have certain performance expectations although we seldom understand the inner workings of the programs themselves.
This is certainly true for the computers we deal with in our personal lives, such as mobile phones, instant banking machines, GPS mapping systems, or the many “gaming” programs that people play. Our interactions with the “program” are limited to the HMI, or Human-Machine Interface, and represent only a tiny fraction of the thousands of lines of computer code that are executing the transaction requests in the background.
The Software Development Model
Although few of us may ever write a program, we do understand that every instruction or line of code in a program is critical to the successful execution of the program as a whole. Every line of code represents a specific instruction, process sequence, or step that must be executed by the computer. Similarly, standardized work identifies the specific steps that must be followed to successfully complete the task.
Any time a computer system “goes down” or a critical error occurs, someone in the IT department is looking for the source of the problem. The software is typically written to at least provide a hint as to where the problem may be. In some cases the solution may be as drastic as rebooting the system or as simple as reloading the specific application.
We should be able to perform a similar analysis when a process fails to perform to expectations or when we are confronted with quality issues or other process disruptions or failures. The ability to consistently repeat a sequence of steps is directly correlated to the quality and level of detail described in the standardized work document.
Aside: An example from the Quality department:
Gage Repeatability and Reproducibility studies, also known as Gage R&R studies, are often used to validate the effectiveness of a measurement system or method (fixture or equipment) for a specific application. If the Gage R&R result is less than 10%, the gage or fixture is deemed to be acceptable; results between 10% and 30% may be conditionally acceptable depending on the application; greater than 30% renders the measurement system unusable for the application. The results can be evaluated statistically to determine or differentiate whether the “problem” is repeatability (equipment variability) or reproducibility (operator variability).
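The acceptance logic above can be sketched in a few lines of Python. This is a minimal illustration, not a full MSA study: the root-sum-of-squares combination of equipment and appraiser variation follows common Gage R&R practice, and the variance figures in the usage comment are hypothetical.

```python
import math

def gage_rr_percent(ev, av, total_variation):
    """Combine equipment variation (repeatability, EV) and appraiser
    variation (reproducibility, AV) into a single %GR&R figure
    expressed against total variation."""
    grr = math.sqrt(ev**2 + av**2)  # combined gage variation
    return 100 * grr / total_variation

def classify(percent_grr):
    """Apply the acceptance thresholds described in the article."""
    if percent_grr < 10:
        return "acceptable"
    if percent_grr <= 30:
        return "marginal"
    return "unacceptable"

# Hypothetical study: small gage variation relative to total variation.
result = classify(gage_rr_percent(ev=0.02, av=0.01, total_variation=1.0))
```

Comparing the relative sizes of EV and AV is what tells you whether to work on the equipment or on operator training.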
When the measurement system fails to meet the requirements of the application, a significant amount of time and effort is expended to achieve an acceptable result. The gauging strategy is reviewed, including part/fixture net locations, quantity of net pads and/or pins, net pad and/or pin sizes, clamping sequences, and clamping pressures; all efforts to improve the measurement system. Instructions are revised and operators are retrained accordingly.
In contrast, we seldom see the same level of time and effort expended to develop, analyze, test and document standardized work at the machine or station where the work is actually being performed. Although the process may be improved to yield a quality product, the method or work instruction to achieve a consistent result is not adequately described or defined.
Understanding the tasks to be performed and the time required to perform them is essential to determine effective process cycle times (rates) and also to understand where changes to the process may yield improved performance. This is of particular importance for companies using OEE as a key process metric.
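Since OEE is mentioned as a key process metric, it may help to show the standard calculation. The sketch below uses the conventional three-factor OEE formula (Availability × Performance × Quality); the shift figures in the example are hypothetical.

```python
def oee_from_raw(planned_time, run_time, ideal_cycle_time,
                 total_count, good_count):
    """Compute OEE from raw shift data using the three standard factors.

    Times share one unit (e.g. minutes); counts are pieces produced.
    """
    availability = run_time / planned_time            # uptime vs planned
    performance = (ideal_cycle_time * total_count) / run_time  # speed vs ideal
    quality = good_count / total_count                # first-pass yield
    return availability * performance * quality

# Hypothetical 480-minute shift: 432 minutes of run time,
# 1-minute ideal cycle, 410 pieces made, 400 of them good.
shift_oee = oee_from_raw(480, 432, 1.0, 410, 400)  # ≈ 0.83
```

Without accurate standardized cycle times, the performance factor in this calculation is little more than a guess, which is why the two topics are so closely linked.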
Note: one indicator that standardized work methods should be reviewed is when poor performance is excused as the result of a “new operator” or a “steep learning curve”.
Extending the Program Model Concept
We can all appreciate the “built-in” or inherent discipline of computers executing thousands of lines of code in the same sequence every time the program runs. To add complexity to our model, consider the discipline and learning that is necessary to write the code itself. The software development team must understand the purpose of the program, how it will be used, design and create a user interface, determine programming algorithms to achieve the desired results and functionality, and ultimately they must write the code that will perform these functions.
Anyone who has attempted to write a program, or knows someone who has, will also be familiar with the term “debug”. There may be at least as many hours spent testing and debugging code as writing it. Even after hundreds of hours of testing, some bugs still make it into the real world. Microsoft’s bug-laden operating system releases have been the target of Apple Computer advertising campaigns for this very reason.
Some code may function without error when executed in isolation and some bugs may not be discovered until the module is interacting with the program as a whole. In this regard, it is also important to consider the potential of interactions with other processes when developing standardized work. Upstream and downstream operations may have a direct impact on the work being performed.
The software development team must select the programming language that will be used to develop the final code and the individual programmers must also follow and understand the syntax and language protocols. Although the product of the software development team is the “executable” program that the computer will run, we can be assured that the process for arriving at this final product is also quite rigorous.
Although we never get to see the native or original code, the modules are likely highly optimized, commented thoroughly, and well documented. These comments are technically “non-value added” steps in the program, however, they usually describe the scope and purpose of the procedure or clarify any code intentions or algorithms. These comments are valuable when debugging may be required or when the code is subject to future reviews.
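A small example may make the point about comments concrete. The function and its domain-flavored name below are hypothetical, invented for illustration; what matters is that the comments add no behavior, yet record the scope and intent that a future reviewer or debugger would otherwise have to reconstruct.

```python
def net_pad_offset(nominal, measured):
    """Return the deviation of a measured net pad location from nominal.

    Scope: used during fixture validation to flag pads that sit outside
    tolerance. The docstring itself is "non-value added" in the sense
    that deleting it changes nothing about what the program does, but
    it preserves the intent of the code for future reviews.
    """
    return measured - nominal
```

The parallel for standardized work: a note explaining *why* a step exists costs nothing at run time but is invaluable when the process is later analyzed or improved.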
The discipline of software development is not too far from the level of discipline that should be in place to develop standardized work. The quality of standardized work processes would improve dramatically if each sequence was given the same level of scrutiny as a single line of code.
Making Improvements with Standardized Work
You may be wondering how flexibility exists in an environment of extreme discipline and rigid rules. It is actually the rigidity and discipline that supports or encourages flexibility. The discipline is in place to encourage managed change events without compromising current process knowledge or levels of understanding. A well-defined process is much easier to understand and therefore is also easier to analyze for improvement opportunities. The level of understanding should be such that a quantifiable margin or level of improvement can be predicted.
With reference to our software model, you will appreciate that the efficiency or speed of the program is dependent on the methods or algorithms that the programmer used to develop the solution. How many times have you stared at your computer wondering “what’s taking so long?” The timing for a simple data sort can vary depending on the method or sorting algorithm chosen by the programmer.
The very language elements or functions that the programmer uses will have a profound effect on program execution time. Many programmers have developed and use high precision “timing” functions to help optimize their code for efficiency and speed of execution. Machine language level programmers are likely to know how many clock cycles each instruction requires.
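The sorting example above can be demonstrated directly with Python’s standard `timeit` module. This sketch compares a deliberately naive bubble sort against the built-in `sorted` (Timsort) on the same data; the sample size and seed are arbitrary choices for illustration.

```python
import random
import timeit

def bubble_sort(values):
    """O(n^2) comparison sort: repeatedly swap adjacent out-of-order pairs."""
    data = list(values)
    for i in range(len(data)):
        for j in range(len(data) - 1 - i):
            if data[j] > data[j + 1]:
                data[j], data[j + 1] = data[j + 1], data[j]
    return data

random.seed(42)
sample = [random.randint(0, 10_000) for _ in range(1_000)]

# Time both approaches on identical input.
slow = timeit.timeit(lambda: bubble_sort(sample), number=5)
fast = timeit.timeit(lambda: sorted(sample), number=5)
# The built-in sort is typically orders of magnitude faster here,
# even though both produce exactly the same result.
```

Both methods deliver an identical “quality product”; only the time to produce it differs, which is precisely the kind of difference standardized work analysis is meant to expose.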
Understanding the “process instructions” at this level allows changes to be considered with predictable outcomes and a known degree of certainty. Changing an algorithm is quite different from simply changing a line of code within a specific function. However, the scope, purpose, and impact of the change can be clearly defined and assessed in advance.
Last Words:
One of the more significant lean developments in manufacturing was the introduction of Quick Changeover and Single Minute Exchange of Dies (SMED). The setup time reductions that have been achieved are truly remarkable and continue to improve with advances in technology.
When Quick Changeover and SMED programs were first introduced, most companies did not have a defined setup procedure or process. The most significant effort was spent developing actual setup instructions: identifying tasks to be completed, determining the sequence of events, who was responsible, and when they could be performed.
Ultimately external and internal setup activities were defined, setup teams were created, specific tasks and sequences of events were assigned and defined, and setup times were reduced from hours to 10 minutes or less.
Standardized Work is a fundamental element of Lean Manufacturing. Just as notes are the language of musicians and combine to make great songs and sounds, so Standardized Work is the language of a Lean organization.
Until Next Time – STAY Lean!