Category: Advanced Lean Manufacturing

Advanced techniques are essentially enhancements or improvements to an existing fundamental process or system.

Lean Code Execution

The performance of your application is important to your customer.  As stated previously, the performance of your application is as dependent on the knowledge and skills of the programmer as it is on the language used to create it.

This excerpt from the book “SQL FOR .NET PROGRAMMERS” by D. M. Bush serves as another example where the programmer’s skills created a problem that could have been avoided if performance was a key consideration:

They decided to be clever and created a name/value pair system instead of putting it all in one row.  It should be no surprise to anyone that once this went into production it couldn’t hold up in real day-to-day use.
… They obviously had a problem.
But this could have been avoided if performance had been part of the criteria.  As the programmer I was discussing this with said, “All they would have had to do is throw in a million fake ‘rows’ and they would have known right away they had a problem before they built out the rest of the system.”
I’m not saying you need to optimize the guts out of a system, but anything that takes more than a few seconds to return a couple thousand rows, definitely has an issue.

Working and Performing are NOT equal

An application that works is not necessarily an application that performs.  As suggested in the excerpt, the root of the performance problem began with the scope of the application itself:  Performance was NOT part of the design criteria.  I contend that this is simply an excuse and not the root cause.

A skilled programmer should have sufficient knowledge to understand when and where code optimization and subsequent performance testing is required.  From the excerpt above, the performance impediment is directly assignable to the initial design and the lack of a test plan.
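The "million fake rows" advice from the excerpt is easy to act on before the rest of a system is built.  The following is a minimal sketch in Python, using the standard library's sqlite3 module as a stand-in for whatever database the real system would use; the table name, column names, and row counts are invented for illustration:

```python
import sqlite3
import time

# Seed a name/value ("EAV") style table with fake rows, then time a lookup.
# N is kept modest here; scale it up (e.g., to 1_000_000) for a real test.
N = 100_000

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE settings (entity_id INTEGER, name TEXT, value TEXT)")
conn.executemany(
    "INSERT INTO settings VALUES (?, ?, ?)",
    ((i // 10, f"attr{i % 10}", str(i)) for i in range(N)),
)

start = time.perf_counter()
rows = conn.execute(
    "SELECT name, value FROM settings WHERE entity_id = ?", (4321,)
).fetchall()
elapsed = time.perf_counter() - start

print(f"fetched {len(rows)} rows in {elapsed:.4f}s")  # full table scan: no index yet

# A load test like this exposes the fix before production does:
conn.execute("CREATE INDEX idx_entity ON settings (entity_id)")
```

Running the timed query again after the index is created shows the difference immediately, which is exactly the kind of finding a test plan is meant to surface early.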

A DVP&R (Design Verification Plan & Report) is one of many tools used to develop new products and materials in the automotive industry.  Software development has an analogous discipline: a skilled programmer understands that unit testing is a critical component of application development.

Performance Tuning

Many programmers take advantage of performance monitoring tools when testing their code.  If you have the opportunity to write T-SQL queries using Microsoft’s SQL Server Management Studio, you will appreciate the various performance monitoring tools and query execution plans it makes available.

Not surprisingly, performance tuning and optimization efforts should focus on code where processes or functions are subject to repeated execution.  Consider that SQL is typically used to work with thousands or millions of records at a time.  Fractions of a second on each iteration can quickly add up to minutes or hours of “wait” time.

I often say, “Be careful who teaches you.”  Many tutorials and books can show you “how” to write code that works.  However, I prefer those that also explain “why” and suggest methods for improving or enhancing performance.

For example, section 5 of the book Learning Python by Fabrizio Romano (Packt Publishing Ltd) is devoted to saving time and resources and echoes the sentiments expressed here.  Certainly, some books are entirely devoted to improving and optimizing performance.

The code we use to perform a given task is critical to the performance of the application.  Various algorithms exist to perform a variety of tasks where some will perform better than others depending on the circumstances.  By way of example, consider an application that continually requires a large number of elements to be sorted.  A programmer who understands the application will implement the best sorting algorithm from the many that are available.

Programmers solve problems!  Just for fun, consider the 2 Eggs Dropping problem as presented at ProgrammerInterview.com.  Although the solution is presented there, it is interesting to note the variety of responses to this same problem on Quora’s Dropping Eggs Q&A page.  You have a firsthand opportunity to see how different people approach the same problem to arrive at a solution.

The Programmer Interview web site (programmerinterview.com) presents a series of questions and solutions for a variety of programming languages (Java, C/C++, SQL, JavaScript, PHP and more) that make for interesting reading and possibly some learning.  “The first nonrepeated character” is another interview problem, one where an explanation of the algorithm’s efficiency is required.
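The “first nonrepeated character” problem has a classic linear-time solution.  Here is a sketch in Python; the function name is my own, and the interview sites present their own variants:

```python
from collections import Counter
from typing import Optional

def first_nonrepeated(s: str) -> Optional[str]:
    """Return the first character occurring exactly once in s, else None.

    Two linear passes -- one to count, one to scan in order -- giving O(n)
    time, versus the O(n^2) of rescanning the string for every character.
    """
    counts = Counter(s)
    for ch in s:
        if counts[ch] == 1:
            return ch
    return None

print(first_nonrepeated("teeter"))  # -> r
```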

DRY is Lean

DRY, an acronym for “Don’t Repeat Yourself”, is a programming principle that is easily applied to writing code effectively.  It is certainly easier to optimize a function or procedure that is written once and used in many places.  Libraries or packages make it easy to update a single piece of code that can be deployed across multiple applications.
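As a trivial sketch of the principle (the names and the discount rule are invented), note that because the rule is written exactly once, a change, or an optimization, lands in exactly one place:

```python
# DRY in miniature: every caller shares this one definition, so changing
# the discount rule (or optimizing it) happens in a single place.

def discounted_price(price: float, rate: float = 0.10) -> float:
    """Apply the standard discount and round to cents."""
    return round(price * (1 - rate), 2)

invoice_total = discounted_price(200.00)   # billing module
quote_total = discounted_price(149.99)     # quoting module
print(invoice_total, quote_total)
```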

There was a time …

Programmers once took pride in writing fast code that was “tight” and required the minimum resources to execute successfully.  When I started programming in the early 1980s, machines were considerably slower, with extremely limited memory and storage.

The concern today is that many “programmers” are simply using “building blocks” of code written by others without really making an attempt to understand what is happening behind the scenes.  As a result, resource-hungry applications are created where poor code is masked by faster multicore processors and seemingly unlimited memory and storage.

The applications may have a great look and feel, but if performance is lacking, so will be customer satisfaction.  The references here suggest that good programmers are intuitively inclined to find the best-fit, high-performance algorithm.  That “performance” needs to be spelled out as a design criterion seems counterintuitive to the best practices of a good programmer.

Until Next Time – STAY lean!


Related References:

2 Eggs Dropping – ProgrammerInterview.com

2 Eggs Dropping – Quora Question / Answer Forum

SQL For .NET Programmers by D. M. Bush, Version 2.0 (Second Edition), DMB Consulting, LLC, 2013-2016.  ISBN-10: 1533071128, ISBN-13: 978-1533071125.

Design Verification Plan & Report (DVP&R) Services – Intertek.com

Learning Python by Fabrizio Romano, Packt Publishing Ltd.  ISBN 978-1-78355-171-2.

Vital Introduction to Machine Learning with Python: Best Practices to Improve and Optimize Machine Learning Systems and Algorithms (Computer Coding).

 


Lean Code – Part 2

Our article on “Lean Code” strongly suggests that the principles of lean can also be applied to the realm of software development, applications, and more specifically, programming.

Python has evolved to become a very popular and powerful programming language.  However, as mentioned in “Lean Code“, the performance of your application or program is as dependent on the skills of the programmer as it is on the capabilities of the programming language itself.

An example of skill versus language can be found in “Python for Data Science – For Dummies – A Wiley Brand” by John Paul Mueller and Luca Massaron (ISBN:  978-1-118-84418-2).  Page 106 of the book states:

It’s essential to realize that developers built pandas on top of NumPy.  As a result, every task you perform using pandas also goes through NumPy.  To obtain the benefits of pandas, you pay a performance penalty that some testers say is 100 times slower than NumPy for a similar task.

The functionality offered by pandas makes writing code faster and easier for the programmer; however, the performance trade-off lands on the end user.  Knowing when to use one module over the other depends on the programmer’s understanding of the language, not merely on obtaining a specific piece of functionality.

Python for Data Science provides sufficient information to decide the best fit case for either pandas or NumPy.  The relevance of sharing this is to stress the importance of continually reading, learning, and understanding as much as possible about your language of choice for a given application.
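The pandas-versus-NumPy trade-off is one instance of a general pattern: a flexible convenience layer usually costs something relative to a direct implementation.  The following stdlib-only Python sketch illustrates the idea without either library; the function names are invented, and the actual ratio varies by machine and workload:

```python
import timeit

data = list(range(10_000))

def convenient_sum(items, transform):
    """Flexible: accepts any per-item callable (the 'convenience layer')."""
    total = 0
    for item in items:
        total += transform(item)
    return total

def direct_sum(items):
    """Inflexible but direct: one job, done by the fast built-in."""
    return sum(items)

# Both paths produce the same answer...
assert convenient_sum(data, lambda x: x) == direct_sum(data)

# ...but the flexible path pays for a Python-level call on every element.
t_convenient = timeit.timeit(lambda: convenient_sum(data, lambda x: x), number=100)
t_direct = timeit.timeit(lambda: direct_sum(data), number=100)
print(f"convenience layer: {t_convenient:.3f}s  direct: {t_direct:.3f}s")
```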

From the end user’s perspective, performance matters and everyone wants it “yesterday”.  So, the question is, “Do we code quickly and sacrifice performance, or sacrifice delivery for quick code?”  What would you do?

Until Next Time – STAY lean!



Lean Six Sigma – After the Fact

Effective problem solving, holistic process management, and data-driven decision making using a systematic and structured approach comprise the core elements and ideology of Lean Six Sigma, commonly referred to as LSS, which is premised on achieving the following four (4) goals and objectives:

  1. Reduce operational cost and risk
    • Objectives:
      1. Increase efficiency
      2. Reduce or eliminate variance
  2. Increase revenue
    • Objectives:
      1. Reduce or eliminate losses
      2. Zero Defects
  3. Improve customer service
    • Objectives:
      1. Perfect Value
      2. Enhanced customer satisfaction
      3. Delivery on time and in full
  4. Continuous improvement
    • Objectives:
      1. Improve effectiveness and efficiencies
      2. Incremental changes daily as there’s always a better way and more than one solution.

Those familiar with Six Sigma will recognize the DMAIC problem solving model where DMAIC is an acronym for:

  • D – Define the problem
  • M – Measure
  • A – Analyze
  • I – Improve
  • C – Control

In their book, Lean Six Sigma (The McGraw-Hill 36-Hour Course), authors Sheila Shaffie and Shahbaz Shahbazi state:

“Six Sigma’s fundamental goal is to reduce operational variance by improving the overall quality and performance levels of business processes.”

A broad definition of variance can be described as a measure of the spread or difference between numbers in a data set.  From a business perspective, variance is the difference between planned and actual results.
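Both senses of variance are easy to compute.  A small sketch using Python’s standard statistics module, with invented daily production figures:

```python
import statistics

# Planned vs. actual output over five production days (invented numbers).
planned = [500, 500, 500, 500, 500]
actual = [480, 510, 465, 500, 495]

# Statistical sense: spread of the actual results around their own mean.
spread = statistics.pvariance(actual)

# Business sense: the day-by-day gap between plan and actual.
gaps = [a - p for a, p in zip(actual, planned)]

print(spread, gaps)
```

Here the actual results show a population variance of 250 around a mean of 490, while the plan-versus-actual gaps sum to a shortfall of 50 parts for the week.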

Lean Six Sigma can be applied to any process where measurable variance exists.  For this reason, Lean Six Sigma is not constrained to the realm of manufacturing or product quality alone.

After the Fact

Many companies attempt to use lean six sigma tools as a means to solve problems when they are reported by the customer – after the fact.  Although variance can only be observed or measured from a process that already exists, this is not to suggest that lean thinking or lean initiatives can only be applied after the fact.

Customer complaints are indicative of inadequate controls, containment measures, and / or a lack of understanding of customer expectations.  One of the objectives of Lean Six Sigma is to prevent non-conformances from occurring at the source.

Performance expectations serve as the baseline by which variance is measured – Plan versus Actual.  Predictability requires the analysis of any variance – good or bad – in our systems, processes, products, or services.  Any variance from plan represents an opportunity to discover and increase our understanding of the current state and to make the necessary improvements from this newfound knowledge.

The focus of lean is the pursuit of perfect value by optimizing the flow of products and services through the entire value stream and eliminating waste.  The seven (7) forms of waste are summarized as follows:

  • Defects
  • Waiting
  • Overproduction
  • Unnecessary transportation
  • Inventory
  • Over-processing
  • Motion

Oftentimes, lean workshops are held to identify the causes of waste in the value stream in an effort to achieve single-piece flow, standardization, increased efficiencies, and improved resource utilization.  Once the value stream map is created, a constraint or bottleneck that impedes the flow of value in your process typically becomes the focus of improvement initiatives.

The same “deep dive” that occurs in a workshop or “after the fact”, when non-conformances are identified internally or externally by the customer, can be performed when the process or system is being developed and designed.

Why Lean Six Sigma?

Lean serves to improve the speed and efficiency of processes, while Six Sigma attempts to improve quality and reduce variation in the process.  Although Lean and Six Sigma initiatives can co-exist as separate entities, a strong correlation exists between them; they are inextricably intertwined.

OEE as an Example

Overall Equipment Effectiveness, or OEE, is one of several key indicators commonly used in manufacturing.  It identifies the major contributors to “lost” production time – essentially accounting for time where a given asset is “idle” when it should be producing parts.

Downtime events result in machine “idle” time and can occur at any time for any number of reasons.  In many cases, however, it is possible to anticipate and mitigate the duration of these events.  From a process design perspective, consider options that may prevent the downtime event from ever occurring.

In manufacturing, a robust process performs at rate and yields high quality products.  Planned maintenance and rigorous process controls sustain predictable performance levels.  Conversely, significant variances observed in the process will yield infinitely variable OEE results and will be evidenced in Availability, Performance, and Quality.
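As a rough sketch of how Availability, Performance, and Quality combine into a single figure, here is the standard OEE arithmetic in Python; the shift figures are invented for illustration:

```python
# Invented figures for one 8-hour shift.
planned_minutes = 480
downtime_minutes = 48            # unplanned stops
runtime = planned_minutes - downtime_minutes       # 432 minutes

ideal_cycle_minutes = 0.3        # ideal cycle time per part
total_count = 1296               # parts produced during runtime
good_count = 1134                # parts that passed quality checks

availability = runtime / planned_minutes                   # 0.900
performance = ideal_cycle_minutes * total_count / runtime  # 0.900
quality = good_count / total_count                         # 0.875

oee = availability * performance * quality
print(f"OEE = {oee:.1%}")  # just under 71%, well short of world class
```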

Unplanned downtime events consume available capacity that in turn constrains planned preventive maintenance activities.  Failing to address variance and anomalies in the process significantly impedes our ability to achieve flawless execution and improve OEE.

Lean Six Sigma provides a model for solving problems – even before they occur – regardless of their scope and scale.  The core process must be capable of consistently yielding a quality product that conforms to customer requirements and expectations.

Lean Six Sigma provides the tools to identify and eliminate or minimize the effect of variance in your process.  Some would argue that unless the process is stable, it becomes increasingly difficult to assess the effect of changes that are introduced.  To the contrary, variance measurements will reflect the effect of any change.

Typically, decisions to change a process are based on inadequate data.  Lean Six Sigma provides the tools we require to perform an in-depth analysis and interpretation of measurements that enable us to make data driven decisions.  ANOVA or analysis of variance is commonly used to provide a statistical assessment of process capability.
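To make the ANOVA idea concrete, here is the one-way F statistic computed from scratch in standard-library Python.  The data are invented (cycle-time samples from three hypothetical machine cells), and a real analysis would go on to compare F against a critical value or p-value:

```python
from statistics import fmean

def one_way_anova_f(groups):
    """One-way ANOVA F statistic for a list of sample groups."""
    all_values = [x for g in groups for x in g]
    grand_mean = fmean(all_values)
    k, n = len(groups), len(all_values)

    # Between-group scatter: how far each group mean sits from the grand mean.
    ss_between = sum(len(g) * (fmean(g) - grand_mean) ** 2 for g in groups)
    # Within-group scatter: ordinary variation inside each group.
    ss_within = sum(sum((x - fmean(g)) ** 2 for x in g) for g in groups)

    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Three hypothetical machine cells; a large F says the between-cell
# differences dwarf the ordinary within-cell variation.
f_stat = one_way_anova_f([[10, 12, 14], [15, 17, 19], [20, 22, 24]])
print(f_stat)  # -> 18.75
```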

Although an OEE of 85% is considered world class, our OEE goal is to continually improve and yield a positive trend over time.  We don’t have to settle for just being world class.

In Summary

Variance is present everywhere and in everything we do.  A culture that embraces and fosters Lean Six Sigma seeks to achieve perfect value and is committed to the pursuit of excellence through flawless execution, the elimination of waste, and continuous improvement in all facets of their business.

While we can learn from our mistakes, the ideal solution is to avoid making them at all.  Deploying a lean six sigma strategy from the onset of any new product or service significantly increases the probability of success.

If customer satisfaction is first and foremost in your organization then Lean Six Sigma is your strategy of choice.  Get your copy of “Lean Six Sigma – The McGraw-Hill 36 Hour Course” by Sheila Shaffie and Shahbaz Shahbazi to learn how to integrate and reap the benefits of all that Lean Six Sigma has to offer.

Until Next Time – STAY lean!



Lean Code

Software applications exist to perform a wide variety of tasks and for any given task there are many applications to choose from.  As anyone who has visited an “App Store” knows, the number of available applications can range from a select few to thousands.  Your performance criteria provide a means of selecting the best application for you or your company.

Customers will question whether your application is worth the investment of both money and time.  It is from this perspective that lean code serves the programmer’s objective to deliver maximum value to ensure the customer is satisfied with their purchase.  Those who buy from the App Store are only concerned with two things:

  1. How much does it cost?
  2. What can it do for me?

The customer’s perspective quickly changes after the purchase to:

  1. Did I get what I paid for?
  2. Does the app perform as expected?

Lean serves to maximize customer value through the elimination of waste.  To some, this translates to providing a low cost application in the shortest time possible.  From our perspective, lean translates to an application’s ability to perform to customer expectations.

Performance Matters

In our view, lean code is determined by an application’s performance – “speed of execution” – not development time.  It is possible to write a fully functioning application in a relatively short period of time using a high level programming language such as Python.  However, the performance of the application may be substantially less than that of an equivalent application written in C.

The best choice of benchmarks to measure performance is real applications… Attempts at running programs that are much simpler than a real application have led to performance pitfalls. – The Computer Language Benchmarks Game – Toy Benchmark Programs

When personal computers were first introduced to market, they were slow, cumbersome, constrained by memory, and disk storage was extremely limited.  The need to write “tight” code to provide as many features as possible was a given.

Today, computers have an abundance of memory, storage, and processing power, giving rise to bloated software applications that are more feature focused and not necessarily performance driven.  A simple “HelloWorld.com” program could be written using the DOS DEBUG utility in only 17 bytes.  By comparison, the most basic “Hello World” program written in the Go programming language compiles to a 1,624,576-byte “.exe” file.

Programming Languages

The performance implications of the language used for a given application cannot be overstated.  Consider that the C programming language is consistently used as the benchmark against which all other programming languages are compared.  Unless you are programming in assembly language, few languages can touch the performance of C.

This is not to suggest that performance is solely dependent on the programming language used to create the application itself.  The performance of an application depends as much on a programmer’s knowledge and ability to effectively apply the capabilities of the selected programming language as it does on the language itself.

Speed of Execution versus Development Time

Sorting data is a common requirement for any software application and there are a number of algorithms to choose from including:  Quick Sort, Bubble Sort, Selection Sort, Insertion Sort, Merge Sort, In-Place Merge Sort, Introsort, Heap Sort, Comb Sort, Bucket Sort, Radix Sort, Tim Sort, Library Sort, and Counting Sort.

The algorithm selected by the programmer will determine how efficiently a sorted list can be generated.  A bubble sort is relatively easy to implement and requires minimal development time but executes slowly whereas a quick sort may require more development time but executes very quickly.  In reality, the best sorting algorithm is not one but a hybrid of multiple algorithms combined into a single sorting solution.
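Python illustrates this nicely: its built-in sort is Timsort, precisely the kind of hybrid (merge sort plus insertion sort) described above.  A quick sketch contrasting it with a textbook bubble sort:

```python
import random
import timeit

def bubble_sort(items):
    """Textbook bubble sort: minimal development time, O(n^2) execution."""
    a = list(items)
    for i in range(len(a) - 1, 0, -1):
        for j in range(i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

data = [random.random() for _ in range(2_000)]
assert bubble_sort(data) == sorted(data)  # same answer either way

t_bubble = timeit.timeit(lambda: bubble_sort(data), number=1)
t_timsort = timeit.timeit(lambda: sorted(data), number=1)
print(f"bubble: {t_bubble:.3f}s  built-in Timsort: {t_timsort:.5f}s")
```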

Building Blocks or Stumbling Blocks

Interpreted and high-level languages are made practical by the continued development and availability of numerous modules, libraries, frameworks, and APIs (Application Program Interfaces), which can save programmers a tremendous amount of time and effort when developing highly complex applications.  Unfortunately, this can also give rise to increasing demands on resources such as available memory and can impede an application’s performance.

All of the advancements made to simplify and reduce development time bring increased functionality that may not necessarily be required by the application.  A developer must either write their own interfaces or accept the consequences of using the packages that are available.

While it is easy to implement and use various packages, another downside is not fully knowing what is really happening behind the scenes.  As we have already noted, a number of algorithms are available to sort our data.  It is a simple matter to write sort('a', 'd', 'e', 'c', 'b') and expect that the list will be sorted correctly; however, we have no insight into the algorithm used to return the result.

Economies of Scale

One reason for concerning ourselves with algorithms and our code’s speed of execution is to understand whether our application will scale, especially where high volumes of data storage and retrieval may be realized.

Excel is a widely used spreadsheet application that is capable of working with relatively large data sets.  For relatively small data sets, Excel is the perfect solution and offers a vast array of capabilities to work with our data.  However, as the spreadsheet continues to grow, performance begins to suffer.  In addition, the size of the data set is limited:  a single worksheet holds at most 1,048,576 rows.  In contrast, a relational database management system such as Microsoft’s SQL Server can easily manage millions of rows of data across a wide-ranging number of tables.

Although the applications share a certain perceived level of common functionality, they are radically different in their implementation and capabilities.  If you have the opportunity to use SQL Server, you will note the emphasis on execution plans and performance.  However, as we have already noted regarding language skills, there is a stark difference between knowing SQL and using it to write effective and efficient queries.

The Best of Both Worlds

The point of comparing Excel and SQL Server is to recognize how each application provides value to the user.  Excel is feature driven and able to work with moderately sized data sets whereas SQL Server is performance driven, offering relatively few features and able to work with extremely large data sets.

For this reason, it is not surprising that Excel can connect to SQL Server as a data source.  The user can now have the best of both worlds where highly efficient SQL Server Queries can seamlessly provide data to a workbook or worksheet where the feature rich capabilities of Excel can be applied.

This comparison between Excel and SQL Server also demonstrates that not all aspects of programming require the same level of code optimization.  There is not much that can be done to improve the efficiency of a process where the application is waiting for the user.  As such, the “value stream” should focus on code where the user is waiting for our application to complete a given task.

Lean Code

To quote from The McGraw-Hill 36-Hour Course: Lean Six Sigma by Sheila Shaffie and Shahbaz Shahbazi (ISBN 978-0-07-174385-3), the following statement can easily be applied to application development from a “Software as a Service” (SaaS) perspective:

Lean Six Sigma is based on the premise that in order to deliver service and product excellence, firms must not only have an in-depth knowledge of their internal processes, but also have a profound understanding of customers’ current expectations and future needs.

Although we have only touched on a few elements of Lean Code, we have identified the need to provide our customers with high performance solutions that will scale to meet their growing demands.  Processes are not only those used to run our business but also include the underlying processes or value streams that comprise the code in our applications.

Lean thinking applies to all facets of our business from customer service and operations management to software development and application performance.  Increasing value to our customers and our stakeholders is the objective of our lean initiatives.

 

Until Next Time, STAY lean!


Related Links:

SQL Performance Explained — Averlytics.com

“Use the Index Luke” is the free to read on-line version of the book “SQL Performance Explained“. If the book is free, why even mention the official “SQL Performance Explained” title for the hard copy? There are two primary reasons for purchasing a copy for my library. One is having a version of my own […]


Kaikaku – Radical Change

Lean has been around for a long time and the naming was not Toyota’s idea. The proof of wisdom is in the results. I don’t sell “lean” per se. I am a change agent with a proven track record that speaks for itself.

Taking a company out of the red and into the black in a matter of months can’t just be described as an exercise in “lean” or “lean thinking.” I’ve had some company presidents call the transformation a miracle.

It’s about radical change, executed quickly and effectively.  It’s not about consulting and advising.  It’s about identifying the opportunities, identifying solutions, and executing changes – immediately!  Kaikaku is the Japanese term for radical change.

There’s little time for discussion.  Just get it done, monitor results, and correct negative trends at the earliest possible moment. Achieving radical change requires all the sense of urgency a crisis deserves. When the business is back on the road to recovery, the time will come when those at the top want to know how “you” did it.

Unfortunately, without a pattern of successes that speak for themselves, we are simply looking for an opportunity to apply what we think we know from someone else’s experiences.  There is no prescription for a “lean” turnaround, though there are commonly known and easily deployed methods that can be used to formulate an effective strategy.

It’s not uncommon to hear, “We have exactly the same problem except …”, and therein lies the reason why we need to embrace lean thinking as opposed to “simply” attempting to copy another company’s solutions.  How likely is it that the circumstances your company now faces are exactly the same as those of another company to yield the requirement for their solution?

Use the tools to develop a solution that addresses YOUR problem, YOUR exceptions, and YOUR expectations.  The growth of your business depends more on what makes your company different from the competition … not your similarities.

Until Next Time – STAY lean!



Timing is Everything – OEE

Flawless Execution – Performance to Plan

Overall Equipment Effectiveness, OEE, is as much about “when” as it is about “how”.  The objective of OEE is to identify opportunities that enable us to maximize the time available to produce quality parts at rate – the ideal cycle time.  This ultimately affects our ability to predict when production should start and finish in kind.

The Plan

Having a plan and executing a plan are far from being one and the same.  Having a plan suggests that we already understand where “losses” are expected to occur.  As such, our ability to execute according to plan is the difference between predictive performance expectations and actual performance results.

The variance between expected and actual results directly correlates to how well we understand our processes – regardless of outcome.  In this same context, the degree of variance observed should also be reflected in the results of our OEE.

This becomes relevant when we consider where we think improvements are required.  If we are unable to predict or anticipate the performance of our processes in their current state, how is it possible to truly identify the return on investment for incremental improvements in the future?

Statistical Process Control (SPC)

How much variance do you observe in the results of your OEE from one run to another?  I wrote a post several years ago titled “Variance, OEE’s Silent Partner (Killer)” that discusses this concept in greater depth.

The key to improving OEE begins by eliminating the excess variance in the results.  In other words, to control OEE requires us to eliminate the sources of variation.  When the results become predictable, the opportunity to control OEE begins.

Availability is typically the greatest contributor to observed differences in OEE.  The primary reasons typically include unexpected machine faults and / or process failures.  An effective preventive maintenance program will minimize and eventually eliminate the effect of “unexpected” downtime events on your OEE results.

In Conclusion

There is more to process performance than monitoring downtime events, speed, and first time through quality levels.  Performance to plan extends the concept to include whether parts are running when they were actually scheduled to run.

Predictable processes provide for greater flexibility in scheduling, as do efforts to reduce setup/changeover times and increase throughput.  Toyota’s Heijunka box, a visual scheduling and leveling methodology, relies heavily on predictable process performance and short setup/changeover times.

Just in time manufacturing demands unparalleled performance that can be enhanced by using OEE as a key indicator in your production operations.

Until Next Time – STAY lean!


Related Articles

  1. Variance – OEE’s Silent Partner (Killer)
  2. Heijunka: The Art of Leveling Production