When asked “what is business improvement?”, one can expect many possible responses.
A typical reply would be “improving business performance”. Such performance improvements could include financial targets such as ROI, RONA or EBIT, eliminating business waste or harnessing the latest technology advancements such as automation, robotics or AI.
Alternatively, one could respond with common buzzword terms such as “business transformation” or “continuous improvement”. However, as this subject is evolving, the entire approach towards business improvement is also changing. Interestingly, when trying to define “what is business improvement”, it is difficult to come by any accurate definition. Most often business improvement is linked to “business process improvement”, and it may be taken as a fait accompli that business improvement depends on process efficiency. But is this really the only factor that defines business improvement today?
I have experienced firsthand, that business improvement encompasses many theories and practices. Most notable is the recognition that only a holistic, systematic, sustainable approach is feasible. To thoroughly understand the interpretation of business improvement today we have to also look at the various techniques applied in this field.
Briefly summarized “one approach to this is focused improvement, which is primarily about elevating the performance of any existing environment, especially a business system, by working on eliminating its constraints” (Stevens, n.d.).
Alternatively, the other approach is “performance improvement which focuses on measuring the output of a particular business process or activity, then morphing or manipulating – however slightly – the process to increase the output, efficiency or the effectiveness of a particular process, activity or procedure” (Stevens, n.d.).
Both approaches are widely applied and accepted as suitable methodologies in the field of business improvement.
In the situation described in the assignment question, continuous (business) improvement is described rather as an end in itself than an approach to “make money now and more in the future”.
In view of the present methods used to optimize the business performance of an imaginary organization, I have recognized that the primary reason for its ineffectiveness is its reductionist approach. Not surprisingly, bottom-line improvements are not achieved in this business.
Besides the two improvement concepts mentioned, better opportunities present themselves in the form of a third approach arising from hard-earned improvement practice. Throughout my career I was fortunate to be involved in alternative improvement avenues, paths that allow a holistic view of the situation of the business along three primary dimensions: physical, informational and financial.
In the practical application of this alternative model, I further recognized the importance of collecting, analyzing and comprehending relevant, accurate data from all three dimensions. Very often it is also crucially important to interview process participants in their work environments, which makes it possible to cross-verify or gather certain data or process parameters.
Beyond this, the most valuable insight I have gained is that we must not only “pay attention to the analytics, strategies and processes of improvement, but also the softer side of improvement, meaning people, feelings, leadership, relationships and the environment for improvement” (Stevens, n.d.).
In this post I will reflect on typical business improvement approaches and their shortcomings. Further, I will elaborate on the merits of systematic, overarching business improvement techniques. No doubt, looking at the business situation in its entirety offers better models and processes to improve the existing position.
Systems thinking – the foundation of Lean business improvement
In the discussion that follows it’s important to contextualize the solutions against the backdrop of an imaginary organization that is supposed to be “improved”. Such a situational analysis will give the discussion and solutions greater clarity and depth.
“In its broadest sense, systems thinking is a framework that takes into account the interconnected nature of systems. It is also a thinking tool, which helps us look at the impact of feedback loops on how a system behaves; analyze specific situations to explain otherwise puzzling behaviors; and design interventions with an eye for potential unintended consequences” (Ballé, n.d.).
“Lean, on the other hand, is strictly a practice, not a philosophy. It is based on hands-on know how about how to teach people to improve their own processes in terms of both customer satisfaction and cost management by eliminating waste. As a practice-oriented movement, Lean is by and large wary of abstract thinking and generalizations” (Ballé, n.d.).
“Because Lean practices have been developed over several decades, an entire field of experience exists” (Ballé, n.d.) in terms of how to implement Lean tools & techniques. But the bottom line is: “Without an understanding of systems thinking, it is very difficult to get Lean right. Conversely, without the practice of Lean techniques, it is difficult to make systems thinking a day-to-day reality to improve system performance concretely” (Ballé, n.d.).
Unfortunately, many companies applying Lean have recognized that pure Lean tool interventions reinforce rather than challenge management’s assumptions, viz. that the main cost-saving opportunities lie in standardization of work and reduction of activity times.
Yet these very assumptions run counter to the teachings of the originator of “Lean”, Taiichi Ohno. He taught systems thinking, through which managers study the way the system operates in order to identify and understand their actual problems. Only by acting on the system and focusing on relationships and information flows do financial benefits follow. Improved financial results are the by-product of identifying and resolving the problems in the system (Seddon, 2011; Seddon, 2015).
Cornerstones of business improvement activities
Before any improvement activities are initiated, fundamental clarity has to be established in the following dimensions:
● Ambition and implementation period
● Scope of processes / products
● Level of current performance, financial and operational parameters
Clarity regarding these points is imperative when starting business improvement activities. All too often, “activism” and pressure to quickly make an impact prevent the proper definition of target conditions and boundary parameters.
Mapping processes and activities is often brought to the forefront of initiatives. Value stream mapping per se, however, does not add any value, yet it may help in understanding the overall situation or in generating new ideas. Of much greater importance at the beginning of the improvement process is a wider understanding of the system, viz. the big picture of the situation the corporation finds itself in. This big-picture, system approach helps to avoid sub-optimization and detailed process-level improvement activities directed mainly by assumptions.
Even though the following 3-step approach is mainly intended for service organizations, I personally prefer the clarity and effectiveness it delivers at the very initial stage, especially for manufacturing companies (Bicheno, 2012; Seddon, 2005).
- Step 1 would be “clarify the system” – are we looking at a business critical core process or is this rather a supporting process environment?
- Step 2 is the “check” stage – is the definition of our system environment correct; have we detected all system issues; do we understand the system performance/capabilities, i.e. do we have supporting data?
- Step 3 is the “redefinition of the system boundaries” stage – what is to be included or excluded in the end-to-end consideration?
Upon completion of these elementary steps the outcomes should be:
- Improving action priorities – potentially there are fundamental issues or questions which need to be resolved prior to mapping value streams, e.g. should we even continue making particular products?
- Defining priorities of particular streams intended for future state design, e.g. products with higher market or earning potential. In this context, a sound understanding of the financial contribution per product is fundamentally important.
Subsequent to the initiation steps there are a variety of topics which have to be addressed as well. In the following paragraphs these topics are further explicated:
- Demand analyses and management
- Target capacity utilization / productivity policies / constraints
- Lead time analyses / arrival variations
- Shipment frequency and on-time delivery performance
- Supply chain structure analyses
- Physical production system topology
- Organizational analyses
Demand analysis & management
The best and most detailed value stream mapping undertaken in a plant is meaningless without understanding demand of the particular mapped product flow. How is this so? Products do not flow on their own, only demand for a saleable product makes them flow. Flow itself is the result of the ability of an organization to classify and manage the various demand characteristics of the saleable products of a company.
Some Lean Manufacturing proponents believe that if production is based solely on actual customer demand at the point of consumption, inventory can be eliminated or at least reduced drastically. This operating model equates to a build-to-order process, which works well if the customer order lead time is greater than the combined purchase, manufacturing and distribution lead time. This school of thought deduces that understanding demand patterns is unnecessary, as the production system is geared to deal with any type of demand.
In my view this is unrealistic, simply because any production environment has to deal with constraints or limitations, be it in production resources or supplier resources. Barriers exist in one or the other. The “magic” lies in how to manage the balance between demand and supply to enable the product to flow.
There are various concepts of demand classification, and analysis of demand patterns is an essential concept for Lean (Bicheno and Holweg, 2016). All concepts share the capability to identify repeatability and stability of demand where possible and to manage other demand categories appropriately. This fundamental thinking allows for optimization of leveled scheduling (heijunka), which addresses the cornerstones of the Lean fundamentals: waste (muda), overburden (muri) and unevenness, i.e. variation (mura).
Only a leveled production process, ensuring that resources are not overburdened and demand is smoothed, is inherently Lean. For this very reason demand patterns have to be classified. Various analysis tools and techniques are available; common to all of them is the thinking “manage what matters most”, usually linked to a Pareto approach. Typical examples are the “Glenday Sieve” or ABC/XYZ approaches (RRS: Runners, Repeaters and Strangers). More evolved is John Darlington’s FRED demand analysis, which detects particular demand patterns. FRED is finer-tuned to detect outliers and erratic, lumpy or management-control demand patterns, and is able to detect seasonality in demand as well (Bicheno and Holweg, 2016).
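A minimal sketch of such a Pareto-based classification, assuming a simple ABC (cumulative volume share) and XYZ (demand variability via coefficient of variation) scheme; the thresholds and product data are illustrative assumptions, not taken from the sources cited above:

```python
import statistics

def abc_xyz(demand_history):
    """Classify products by volume (ABC, Pareto) and demand
    variability (XYZ, coefficient of variation).
    demand_history: {product: [demand per period, ...]}
    The 80/95% and 0.5/1.0 cut-offs are illustrative only."""
    totals = {p: sum(d) for p, d in demand_history.items()}
    grand_total = sum(totals.values())
    # Rank by volume and assign ABC by cumulative share
    ranked = sorted(totals, key=totals.get, reverse=True)
    classes, cum = {}, 0.0
    for p in ranked:
        cum += totals[p] / grand_total
        abc = "A" if cum <= 0.80 else "B" if cum <= 0.95 else "C"
        # XYZ from the coefficient of variation of period demand
        mean = statistics.mean(demand_history[p])
        cv = statistics.pstdev(demand_history[p]) / mean if mean else float("inf")
        xyz = "X" if cv < 0.5 else "Y" if cv < 1.0 else "Z"
        classes[p] = abc + xyz
    return classes

history = {
    "P1": [100, 105, 95, 100],  # high volume, stable -> runner
    "P2": [20, 0, 60, 5],       # low volume, erratic -> stranger
    "P3": [30, 28, 33, 29],
}
print(abc_xyz(history))  # {'P1': 'AX', 'P3': 'BX', 'P2': 'CZ'}
```

Products classified “AX” are candidates for leveled scheduling, while “CZ” demand is better managed by other means (e.g. make-to-order).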
In the course of these analyses, an understanding of actual customer-driven demand will develop (cleaned of self-induced variation, data errors, etc.). On this basis the actually achieved customer order throughput time is recorded. Simultaneously, the end-to-end time required to meet customer demand is determined. Here it is critical to note that process-level value stream mapping alone will not reveal this total order lead time accurately.
Capacity utilization, productivity policies and constraints
As I have highlighted above, value streams, i.e. production systems, have to be balanced between demand and supply (materials and capacity).
Whenever demand exceeds capacity, a queue (backlog) builds up. This backlog is only eliminated once capacity is higher than demand.
The Kingman Formula – Variation, Utilization, and Lead Time
This formula, developed by John Kingman in the 1960s, describes an approximation of the mean queuing time in a system with limited capacity. Most important is the recognition that queuing time (lead time) increases exponentially with rising capacity utilization. As utilization increases, uncertainty escalates, i.e. lead times become unpredictable.
Reflecting on this finding, prescribed high capacity utilization targets may appear totally arbitrary. Due to their impact, such underlying policies or targets must be identified prior to any mapping activity. A consensus must be reached regarding the targeted levels, and a balance must be struck between utilization (productivity) targets and targeted lead times.
As John Bicheno and Matthias Holweg rightly note: “Generally, if you want lower lead times then you must either have lower utilization (somewhat like Toyota), or lower process or order variation (again like Toyota). The higher your demand variation (see demand analysis) the longer will be your queues for any level of utilization” (Bicheno and Holweg, 2016).
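The exponential relationship can be illustrated with Kingman’s approximation (the VUT equation), Wq ≈ [ρ/(1−ρ)] · [(ca² + cs²)/2] · te, where ρ is utilization, ca and cs are the coefficients of variation of inter-arrival and service times, and te is the mean service time. The figures below are purely illustrative:

```python
def kingman_wait(utilization, ca, cs, mean_service_time):
    """Kingman (VUT) approximation of mean queuing time:
    Wq ~ utilization/(1 - utilization) * (ca^2 + cs^2)/2 * te."""
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be in [0, 1)")
    v = (ca ** 2 + cs ** 2) / 2           # variability term
    u = utilization / (1 - utilization)   # utilization term
    return u * v * mean_service_time      # time term

# Queuing time explodes as utilization approaches 100%
# (moderate variability ca = cs = 1, mean service time 1 hour):
for rho in (0.5, 0.8, 0.9, 0.95, 0.99):
    print(f"utilization {rho:.0%}: wait ~ {kingman_wait(rho, 1, 1, 1.0):.1f} h")
```

With these assumed inputs, the wait grows from 1 hour at 50% utilization to 99 hours at 99%, which is exactly why utilization targets and lead-time targets must be traded off consciously.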
In order to simulate the effects of variation on utilization, taking demand and capacity into consideration, a valuable instrument is a time-based capacity planning model. Such a model takes into account timings (set-up, cycle, uptime), routings (operation steps, losses), demand (per individual product), resources (machinery/production equipment, shift patterns), shifts (available hours per day) and financials (throughput per product).
In load analysis, the time-based capacity planning model accurately demonstrates the level of utilization on the basis of demand and capacity, taking into account variations, i.e. quality (scrap) or machine breakdowns. Furthermore, the summarized times needed for changeovers are revealed.
As described in the following paragraph, there is also another time-based variation, so-called arrival variation, which is induced by erroneous master data parameters in the ERP system. This inherent variation confounds efforts to smooth arriving demand.
In contrast to classical batch size calculation methods (EOQ and the like), a time-based capacity planning model allows a determination of the available leftover time based on the calculated load. This leftover time is considered available for a deterministic number of changeovers. The minimum (approximate) batch size is calculated from the maximum number of changeovers possible within the available leftover time. In my calculations this rationale has proven to work well, as it commits only to a batch size that protects the resource from turning into a bottleneck.
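The batch-size rationale above can be sketched as follows; all parameter names and figures are illustrative assumptions rather than the specific model described in the text:

```python
def min_batch_size(available_hours, demand_units, cycle_time_h,
                   changeover_time_h, uptime=1.0):
    """Derive a minimum batch size from a resource's leftover time.
    Load = demand * cycle time (adjusted for uptime losses);
    the leftover time caps the number of changeovers, and hence
    the number of batches the demand can be split into."""
    load_h = demand_units * cycle_time_h / uptime
    leftover_h = available_hours - load_h
    if leftover_h <= 0:
        raise ValueError("resource overloaded before any changeover")
    max_changeovers = int(leftover_h // changeover_time_h)
    if max_changeovers == 0:
        raise ValueError("no leftover time for even one changeover")
    # One changeover per batch: batch size = demand / number of batches,
    # rounded up so total demand is still covered (ceiling division)
    return -(-demand_units // max_changeovers)

# Illustrative: 160 h available, 1,000 units at 0.125 h/unit,
# 2.5 h per changeover -> 35 h leftover -> at most 14 changeovers
print(min_batch_size(160, 1000, 0.125, 2.5))  # 72
```

Committing to batches of 72 units (rather than an EOQ-derived figure) guarantees the resource never spends more time changing over than its leftover time allows.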
Lead-time analysis and variations
As most manufacturing companies operate ERP systems, the availability of accurate lead-time or other timing information is seemingly of no concern. Nonetheless, I voice a word of caution in this context: all too often such master data has been found to be incorrect on closer scrutiny. To recognize the true capabilities of an organization, my suggestion is to apply Little’s law from queuing theory. Most useful is the determination of the calculated lead time of a value stream. The applied formula is an adaptation of WIP = Throughput × Lead time, hence Lead time = WIP / Throughput.
The definitions for these equations’ parameters are (Caroli, 2018):
- “Lead time is the time between the initiation and delivery of a work item”
- “WIP – Work in Progress; the number of work items in the system. Work that has been started, but not yet completed”
- “Throughput is the rate at which items are passing through the system”
The focus in this analysis is on the finished product, not its components. The resulting variation between quoted (recorded or planned) and calculated lead times is a measure of execution effectiveness. In other words, if there is a large difference between these values, the root causes have to be identified. Most often the root cause lies in the fact that planned lead times are based on wrong procurement lead times or routing timings in the ERP system, i.e. too optimistic (too short), resulting in production orders being launched too early.
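A minimal sketch of this cross-check between planned and Little’s-law-calculated lead times, with purely illustrative figures:

```python
def littles_law_lead_time(wip_units, throughput_per_day):
    """Little's law: Lead time = WIP / Throughput."""
    return wip_units / throughput_per_day

def lead_time_gap(planned_days, wip_units, throughput_per_day):
    """Gap between ERP-planned and calculated lead time.
    A large positive gap suggests planned lead times are shorter
    than what the system actually achieves, so orders are
    launched too early."""
    return littles_law_lead_time(wip_units, throughput_per_day) - planned_days

# Illustrative: 900 orders in process, 30 completed per day,
# while ERP master data claims a 10-day lead time.
calc = littles_law_lead_time(900, 30)                 # 30 days in the system
print(f"calculated {calc:.0f} d, gap vs plan: {lead_time_gap(10, 900, 30):+.0f} d")
```

Here the calculated lead time of 30 days against a planned 10 days would trigger a root-cause review of the ERP routing and procurement timings.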
An examination of planning alerts (receipts or schedules dated in the past) may provide further insight into execution effectiveness (Bicheno and Holweg, 2016). The ultimate goal in this context must always be material planning and scheduling without arrears.
Shipment frequency and on-time delivery performance
Lack of adherence to shipment patterns, or poor on-time delivery performance, is an indication of process instability. Shipping schedules play a major part in the overall scheduling methodology, simply because shipping is an aggregator function in most manufacturing environments. Questions to be asked are: Do shipments follow a fixed schedule, e.g. a particular pick-up day, or are shipments only staged once shipping-unit utilization thresholds are reached, e.g. a full truck or full container?
These parameters influence inventory holdings, i.e. buffer sizes, and production scheduling activities. A further determinant is the shipment content structure: do shipments consist of large batches of single materials only, or are they consolidated, i.e. mixed loads of varying materials and batches? Before any mapping activities, these data should be collected and verified.
Supply chain structure analysis
From a pure value stream perspective it may be considered uncommon to analyze the core supply chain interdependencies at this preliminary stage. However, it must be accepted that almost all manufacturing organizations rely heavily on their supply chain, so much so that typical OEMs for, e.g., machinery or vehicles purchase about 60-80% of their finished-product content. Thus the processes of those content suppliers form an integral part of the OEM value stream.
In today’s business environment it must be recognized that it is value chains, not individual companies, that compete with one another. Martin Christopher appropriately notes that in traditional operations management, processes are optimized within one factory. Furthermore, the notion persists that by linking these local optima, the global optimum is achieved at the supply chain level (Christopher, 1998). Tellingly, the often-cited “bullwhip effect” is one proof of how dysfunctional this assumption is. Thus it must be noted that:
“Supply chain capabilities are a significant determinant of competitiveness since the final product is not the sole achievement of the OEM, but the customer experience is co-determined by the supply chain in terms of quality, cost and delivery. A significant proportion of the value of the final product is generally sourced from suppliers. The performance of one tier in the supply chain is a function of the supply and distribution functions, i.e. surrounding tiers. In other words, the supply chain is only as strong as its weakest link, i.e. supplier” (Bicheno and Holweg, 2016).
Physical production system topology
In another initial analysis the physical process map is outlined to identify the major operations steps. The sequencing of these steps is then later used for process mapping and scheduling topics. From this simplified operations steps sequence it is possible to recognize principal flow types. These primary flow types are categorized as follows:
- V type: The downstream operations steps diverge from a single starting point resource.
- A type: The downstream operations steps converge into a single resource at the end.
- T type: Different finished products are built from common components, which are assembled on dedicated lines and combined at the final stage.
In complex manufacturing environments several of these primary flow types may be applied. In combination with this flow-type mapping, the organizational structure should also be reviewed; this will indicate whether functions, roles and responsibilities are in alignment with the sequence of operations (Bicheno and Holweg, 2016; Darlington, 2018).
In light of the knowledge gained from the demand management analysis, it is worthwhile to examine the organizational structure as well. Organizational structures not synchronized with operational sequences are often a handicap in focusing on the value stream of a product.
“Ideally a value stream should be a self-contained unit including scheduling, design, sales, quality, maintenance and accounting – a factory within a factory” (Bicheno and Holweg, 2016). Value streams could be organized according to demand patterns – products with “normal” demand patterns could be run on a separate value stream from products with “lumpy or erratic” behavior. Why should this be considered? Primary reasons for this are “simplicity of flow, clear aims and priorities, team-based identity and motivation, clear accountability” (Bicheno and Holweg, 2016).
A value stream focused organization may also mean fewer “monuments” (massive automated machines) and more cell-oriented manufacturing with simpler machinery. The term “focused factory” was coined by Wickham Skinner of Harvard Business School in the 1970s. In his article “The focused factory” he concluded that focused factories far outperformed mixed value stream manufacturing plants (Skinner, 1974). Besides stream alignment according to demand classification, value streams can also be organized in various other ways, e.g. according to technical similarities or major customers (Brumme et al., 2015).
Value stream mapping – the merits and pitfalls
When “Lean Thinking”, the blockbuster volume on how to implement Lean, reached the bookstores, the undertone of the book was in principle “just do it”. The authors of “Lean Thinking” (Womack and Jones, 2003) were quick to notice that this “just do it” approach had its flaws.
As in any transformation process, sticking to a prescribed formula is no easy feat. In their step-by-step transformation process description in chapter 11 of “Lean Thinking” (Womack and Jones, 2003), most overlooked is step 4: “Map the entire value stream for all of your product families”.
In “Learning to See” (Rother and Shook, 1999), the authors Mike Rother and John Shook picked up on exactly this weak point of “Lean Thinking” and presented a guidebook on mapping the value stream. Subsequently published comprehensive VSM reference books picked up on this rationale, including the recommendable standard work “Mapping the Total Value Stream” (Nash and Poling, 2008).
Value stream mapping, abbreviated VSM, is a technique used to define and optimize the various steps involved in completing a product or a service from start to finish. The application of VSM is also referred to as visualization of all steps in a work process. If done properly a value stream map not only takes into account the product inherent activities, but it also supports information flow and management processes.
Fundamentally there are at minimum two mapping sequences: the current state and the ideal state. As the ideal state may often be considered improbable, a future (in-between) state of a particular value stream may be mapped as well.
The VSM process is ideally conducted in the following order:
- Process selection: Ideally VSM is done at a suitable “aggregation” level, i.e. by product/product family, distribution channel or other particular business segmentation. Mapping everything that is going on in a business is generally not very helpful; such maps are overly complicated. Besides the physical flow, most crucially the information flow has to be mapped. It is commonly recommended to commence with the physical flow, beginning with the distribution of the final product and working upstream to the start of the process. However, as practical examples have taught me, depicting the information flow is often much more challenging and more crucial to the business overall. As VSM combines the information flow with the physical goods flow in one process map, I would recommend starting the mapping with the important, interlinking information flow elements, as these define the breadth and depth of a process to a great extent.
- Data collection: A further challenge in any mapping process is the collection of process data. Data quality and accuracy are crucial in order to derive significance in the determination of process and resource constraints.
As with any tool or technique, VSM undoubtedly has advantages as well as disadvantages. VSM is not a catch-all, multi-purpose tool that solves all process problems. It must be said that any issue not connected to the material or information flow is unlikely to benefit from a value stream map.
VSM is suited to relating manufacturing processes to their supply chains and distribution channels. It also allows the integration of material and information flows. In this context it links production control and scheduling functions, i.e. production planning and shop floor control, using operating parameters such as cycle and changeover timings. This information stems from routings or actual time recordings and forms the basis for the implementation of time-based capacity modeling, by designing the production system based on the complete end-to-end flow time for a product family.
Further it provides a company with a “blueprint” for strategic planning to deploy the principles of Lean Thinking for their transformation into a Lean Enterprise.
Limitations of value stream mapping
So far so good, but VSM done in isolation will not produce meaningful results in a variety of scenarios. VSM might fail outright in scenarios with multiple products that have no identical material flow maps. It also lacks economic measures of “value” (e.g. profit, throughput, operating costs and inventory expenses). It is further inadequate in displaying the facility layout and its spatial structure. Furthermore, it is unable to show the impact on WIP, order throughput and operating expenses of inefficient material flows in the facility, i.e. backtracking, crisscross flows, “non-sequential flows, large inter-operation travel distances and how that impacts inter-operation material handling delays, the sequence in which batches enter the queue formed at each processing step in a stream, container sizes, trip frequencies between operations, queuing delays, sequencing rules for multiple orders and capacity constraints” (Irani and Zhou, 2018).
When dealing with complex, multi-tier product BoMs with multi-level operation process charts, VSM becomes confusing. A further limitation is its bias towards high-volume, low-variety manufacturing systems, i.e. systems that typically favor assembly-line setups geared for continuous flow. In conjunction with this, it fails to consider the allocation and utilization of shop floor space for WIP storage, material handling and production support.
Conclusion – to understand and to change
“Value stream mapping is a technical tool that examines the physical system, processes and interconnections” (Strategosinc.com, 2018). Equally, and critically, important for the success of any business improvement initiative are the human resources. Production environments are complex socio-technical systems that require an integrated, systems thinking approach, as recognized earlier in this assignment.
Besides technical aspects, any improvement initiative has to foster a culture of highly motivated teamwork for coordination and problem solving. Corporate and improvement-program leadership is required to effectively mobilize the collective intelligence of the organization. As an example: the Industrial Revolution and the rapid progress of science are the result of a close interplay of theoretical and practical advances.
In today’s hyper-competitive environment, corporations are in constant search of performance improvements. It remains an unresolved puzzle why Lean techniques have not spread more quickly through industries: few succeed, though many try. What is lacking, I believe, is an overall framework allowing one to take the “bird’s eye” perspective.
In my view, systems thinking is the missing link in this equation. Systems thinking could significantly help one understand the overarching thinking models, decision loops, behavior patterns and system structures.
Lean techniques are promising elements in helping to reduce waste, but only in combination with systems thinking do they merge into a very powerful symbiotic process. The interdependence of systems thinking and Lean approaches offers much potential, as highlighted in this assignment.
I believe that the mutual interdependence of systems thinking and Lean offers a true opportunity here. “By recognizing the synergies between these two fields, we can drastically increase our capacity” (Ballé, n.d.) and ability to improve the corporation’s performance. Such change can come only from within an organization, by understanding the interactions of the wider system and the behavior of the human element.
“To paraphrase Karl Marx, the point is not merely to understand the world, but to change it. Systems thinking offers the means to understand; Lean provides the opportunities to practice change. By pursuing both jointly, we can learn faster how to change the world in the right way to face our global challenges” (Ballé, n.d.).