**Tricks of the Trade**

Work smarter, not harder! If we’re honest with ourselves, we realize that we sometimes have a tendency to make things more difficult than they need to be. A statistics guru once asked me why a sample size of five (5) is commonly used when plotting X-Bar / Range charts. I didn’t really know the answer but assumed there had to be a “statistically” valid reason for it. Do you know why?

Before calculators were commonplace, sample sizes of five (5) made it easier to calculate the average (X-Bar): add the numbers together, double the result, then move the decimal one position to the left. All of this could be done on a simple piece of paper using some very basic math skills, making it possible for almost anyone to chart efficiently and effectively.

- Sample Measurements:
- 2.5
- 2.7
- 3.1
- 3.2
- 1.8

- Add them together:
- 2.5+2.7+3.1+3.2+1.8 = 13.3

- Double the result:
- 13.3 + 13.3 = 26.6

- Move the decimal one position to the left:
- 2.66

To calculate the range of the sample size, we subtract the smallest value (1.8) from the largest value (3.2). Using the values in our example above, the range is 3.2 – 1.8 = 1.4.
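The pencil-and-paper steps above can be sketched in a few lines of Python, using the sample values from the example:

```python
# Sample measurements from the example above
samples = [2.5, 2.7, 3.1, 3.2, 1.8]

# X-Bar via the add-double-shift trick:
total = sum(samples)        # 2.5 + 2.7 + 3.1 + 3.2 + 1.8 = 13.3
doubled = total + total     # 13.3 + 13.3 = 26.6
x_bar = doubled / 10        # move the decimal one position left: 2.66

# Range: largest value minus smallest value
sample_range = max(samples) - min(samples)   # 3.2 - 1.8 = 1.4
```

Dividing the doubled sum by ten is, of course, the same as dividing the original sum by five; the trick simply trades a division for an addition and a decimal shift.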

The point of this example is not to teach you how to calculate Average and Range values. Rather, the example demonstrates that a simple method can make a relatively complex task easier to perform.

**Speed of Execution**

We’ve written extensively on the topic of Lean and Overall Equipment Effectiveness (OEE) as means to improve asset utilization. However, the application of Lean thinking and OEE doesn’t have to stop at the production floor. Can the pursuit of excellence and effective asset utilization be applied to the front office too?

Today’s computers operate at different speeds depending on the manufacturer and installed chip set. Unfortunately, faster computers can make sloppy programming appear less so. In this regard, I’m always more than a little concerned with custom software solutions.

We recently worked on an assignment that required us to create unique combinations of numbers. To determine whether a bit is set, we used a “mask” that is doubled after each iteration of the loop. This simple loop is also the kernel, or core code, of the application. All computers work with bits and bytes. One byte of data has 8 bit positions (0-7) and represents numeric values as follows:

- 0 0 0 0 0 0 0 0 = 0
- 0 0 0 0 0 0 0 1 = 1
- 0 0 0 0 0 0 1 0 = 2
- 0 0 0 0 0 1 0 0 = 4
- 0 0 0 0 1 0 0 0 = 8
- 0 0 0 1 0 0 0 0 = 16
- 0 0 1 0 0 0 0 0 = 32
- 0 1 0 0 0 0 0 0 = 64
- 1 0 0 0 0 0 0 0 = 128

To determine whether a single bit is set, we test against the numbers 1, 2, 4, 8, 16, 32, 64 and so on, each representing a unique bit position in binary form. Since this setting and testing of bits is part of our core code, we need a method that can double a number very quickly:

- Multiplication: Multiply by Two, where x = x * 2
- Addition: Add the Number to Itself, where x = x + x

These seem like simple options; however, in computer terms, multiplication is slower than addition, and SHIFTing is faster than addition. You may notice that every time we double a number, we’re simply shifting our single “1” bit one position to the left. Most computers have a built-in SHL instruction in the native machine code designed to do just that. In this case, the speed of execution of our program will depend on the language we choose and how close to the metal it allows us to get. Not all languages provide for “bit” manipulation. For this specific application, a compiled native assembly code routine would provide the fastest execution time. Testing whether a bit is set can also be performed more efficiently using native assembly code.
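For readers who want to see the bit test itself, here is a short sketch in Python rather than assembly: shifting builds the mask, and a bitwise AND isolates the bit in question.

```python
def bit_is_set(value: int, position: int) -> bool:
    """Test whether the bit at `position` (0 = least significant) is set."""
    mask = 1 << position          # a single "1" bit shifted into place
    return (value & mask) != 0    # bitwise AND isolates that one bit

# 0 0 1 0 0 1 0 1 = 37: bits 0, 2, and 5 are set
print(bit_is_set(37, 0))   # True
print(bit_is_set(37, 1))   # False
print(bit_is_set(37, 5))   # True
```

A higher-level language like Python won’t match a hand-tuned SHL instruction, but the operations map directly onto the same machine-level shifts and ANDs described above.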

**Method Matters**

The above examples demonstrate that different methods can be used to yield the same result, though the cycle times for each method will differ. This matters from an Overall Equipment Effectiveness (OEE) perspective: just as companies focus on reducing setup times and eliminating quality problems, many also focus on improving cycle times.

Where operations are labour-intensive, simply adding one or more people to the line may improve the cycle time. Unless we also update the cycle time in our process standard, the Performance Factor for OEE may exceed 100%. If we instead use the ideal cycle time determined for our revised “method”, it is possible that the Performance Factor remains unchanged.
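One common formulation of the OEE Performance Factor is (Ideal Cycle Time × Total Count) ÷ Run Time. The sketch below uses hypothetical numbers (400 or 480 parts, a 480-minute run, cycle times of 1.2 and 1.0 minutes per part) to show how an outdated standard can push Performance past 100% while the revised standard leaves it unchanged:

```python
def performance_factor(ideal_cycle_time, total_count, run_time):
    # Performance = (Ideal Cycle Time x Total Count) / Run Time
    return (ideal_cycle_time * total_count) / run_time

old_ideal = 1.2   # minutes per part, original process standard
new_ideal = 1.0   # minutes per part after adding a person (assumed)

print(performance_factor(old_ideal, 400, 480))  # 1.0 -> 100% at the old pace
print(performance_factor(old_ideal, 480, 480))  # 1.2 -> 120%, exceeds 100%
print(performance_factor(new_ideal, 480, 480))  # 1.0 -> 100% vs revised standard
```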

**Last Words**

The latter example demonstrates once again why OEE cannot be used in isolation. Although an improvement to cycle time will create capacity, OEE results based on the new cycle time for a given process may not necessarily change. Total Equipment Effectiveness Performance (TEEP) will actually decrease as available capacity increases.

When we’re looking at OEE data in isolation, we may not necessarily see the “improved” performance we were looking for – at least not in the form we expected to see it. It is just as important to understand the process behind the “data” to engage in a meaningful discussion on OEE.

**Your feedback matters**

If you have any comments, questions, or topics you would like us to address, please feel free to leave your comment in the space below or email us at feedback@leanexecution.ca or feedback@versalytics.com. We look forward to hearing from you and thank you for visiting.

**Until Next Time – STAY lean**


**Versalytics Analytics**