Flow efficiency – part 2/5 – VSM made on production and projects

Today we are digging into production, and into product development with a waterfall way of working, as seen through Value Stream Mapping, VSM.

As stated in the last blog post, VSM treats waste as the non-value-adding activities in a value stream between clearly stated start and end points. It is important to look at the activities, not the processes in which the activities are done. This is extremely important to understand, since it is vital for product development, which never repeats an activity; only the process for working with the activity is repeated. This is diametrically opposed to production, where there is a repetitive pattern. In production, for every process, the activity (WHAT) generates the work to be done on the activity (HOW). In product development the WHAT never generates exactly the same HOW to work with the WHAT, which means we can only specify a high-level process. For product development it then becomes clear that we need the actual activity, not only a process taking care of an activity, in order to get its real length in the value stream.

What also needs to be taken into account, and requires some elaboration here, is the case when we do not break down our WHAT according to our architecture*. This happens for example when we divide our WHAT into end-to-end features in software, which means that we need to understand HOW the solution can be broken down into smaller parts. We then actually have two different HOWs: one HOW for the solution, needed in order to find the smaller WHATs, and another HOW for doing the actual work. Understanding these two different HOWs is vital, because when doing features, the dividing of a feature into smaller pieces will affect the HOW of the actual work, especially when we do not have the customer need fully under control. Below follows a thorough elaboration, supported by pictures, on the differences not only between production and product development, but also between hardware and software development regarding the WHAT, the HOW to divide the WHAT into smaller activities, and the HOW to do the actual work.

In product development, regardless of whether it is hardware or software, we make an architecture that consists of subsystems, sub-subsystems, etc., depending on the size of our product. The reason why we make an architecture is the same reason we have a line structure of the organisation, a working structure of the organisation and a planning structure of the work we are doing. We simply structure our reality of work to be done into smaller parts, in order to reduce the complexity.

Hardware product development
In hardware product development the experiments and the innovations are done before the actual project starts, or multiple concepts (of which one will be successful) are run within the project. Multiple concepts, to mitigate the risk from complexity around the customer need, are common for the industrial design team in the early part of the project. With this way of working, the predictability of the delivery will be high, since the project has one secure path to a successful delivery that only requires exploitative development with some pre-planned prototypes. Early prototyping for industrial design and the appearance of the product is not new and has been part of Lean Product Development for a long time; it was already part of the way of working for many companies in the 1970s.
Since it is impossible to deliver features to the customer in hardware product development**, working with I-shaped teams that have responsibility for their subsystem in the architecture has always been the natural and successful way of working for hardware projects. This means that the WHAT can easily be divided into smaller parts, with no limitations all the way down to team level regarding HOW the solution will look. By doing a systems design and reducing the complexity with our architecture, we have already taken into account HOW the solution will look as a whole when breaking down the functions into smaller parts. HOW to do the actual work in each subsystem is up to the respective I-shaped team, though of course all the interdependencies also need to be taken care of. This also means that it is easy to measure how much time a team needs for the solution of their WHAT, and that this can be measured with very low risk of sub-optimisation or gaming the measurement. Here is a picture that explains this further, and here as a pdf file, traditional dividing of the work:
Hardware manufacturing
The architecture made in hardware product development is of course valid also when manufacturing the product. The total product (the total WHAT) is easily divided into smaller parts, down to the bottom-level processes, which are standardised and stable. These processes aggregate their work with the help of the architecture, by finding the best way of putting the parts together, which of course has been made possible by close cooperation between production development and the manufacturing departments of the organisation. The picture looks almost identical to the one above regarding hardware product development; the only difference is that aggregation of the parts is done in manufacturing, while integration of the parts is done in hardware product development. A good enough explanation of the difference between aggregation and integration is: in aggregation you can be sure that putting the parts together will work, but in integration you can be sure that it will not work. Here is the picture for production, and here as the pdf file, production dividing of the work.
Software Product development
In software product development it is of course possible to work with I-shaped teams too, each one responsible for a subsystem's functions. This can be suitable when we have low customer interaction, for example when the hardware technology is the most important thing for the customer, not the software, or when we have software functions, like rules, calculations, etc., within our product.
But in software product development we also have the possibility to deliver the software incrementally by delivering features to the customer, which is often preferable. This is done iteratively within an increment, giving us early feedback so we can adjust our work to (possibly) changed customer needs. This early feedback gives us the possibility not only to make the product that our customer really needs, but also to deliver the product as early as possible. Making features and their smaller parts means that instead of working with I-shaped teams responsible for the functions within a part of the architecture, we work end to end, cross-functionally across the organisation and the architecture, GUI, etc., with T-shaped teams doing features.
But making features means that even though we have an architecture that makes it easy to divide the WHAT into smaller parts without affecting HOW the solution will look, we cannot take advantage of this anymore. We have put the complexity back into the loop when we disregard our architecture, meaning that dividing the feature into smaller pieces can never be done in advance within an increment. The parts will instead change successively every iteration as we explore (not exploit) our way to the solution, and this even if we already know exactly what our customer needs are. So we actually have two different complexities to handle within software development: one regarding how to make the WHAT (the feature) smaller, and one about understanding our customer needs, which together make it impossible to divide the feature into pre-defined smaller pieces. This of course means that prediction, and therefore measuring on pieces at a lower level than the feature level, is impossible. Here is a picture trying to show this, and here is the pdf file, non-architectural dividing of work – new:
Note also that combinations of ways of working are possible, where some teams work with features and user stories, while other teams solve the complexity of a regulator algorithm with many unknown parameters, or work with UX or industrial design. These teams then do not work within the takt time of the other teams, since the length of their iterations can be very short and of varying length. Instead it is the Integration Events between the teams that provide the synchronisation, though the follow-up can of course be done at the same cadence as the sprint, or continuously. This is an important matter, since some parts of the software should not, or cannot, be divided into parts. For hardware this is self-evident: you do not make the motor of a car by doing features, it is simply a subsystem consisting of many sub-subsystems, etc. The same goes for the regulator algorithm, some value rules, and others; breaking them down into features and user stories would give a tremendous dose of spaghetti coding and the loss of the wholeness. Of course features or user stories can call the regulator algorithm function/subsystem, and also use dummies if the function/subsystem is not ready yet.
Having standardised functions/subsystems (that can be called from features and user stories) was brought up by Brad Cox, author of Object-Oriented Programming, in the article “Planning the Software Industrial Revolution” already three decades ago: “The possibility of a software industrial revolution, in which programmers stop coding everything from scratch and begin assembling applications from well-stocked catalogs of reusable software components, is an enduring dream that continues to elude our grasp…” [1].

With clear, well-specified interfaces (I/F), a module can easily be reused, as Brad Cox points out, which will significantly reduce the cost of further development, not to mention time to market.
Features more or less go the other way compared to software modules, but features/user stories and modules can clearly be intertwined, with basic software modules that are easily called from features and user stories.
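To make the intertwining concrete, here is a minimal sketch of the idea above: a feature calls a standardised subsystem through a well-specified interface, and a dummy stands in until the real regulator algorithm is ready. All names (`Regulator`, `control_signal`, `feature_adjust_speed`) are hypothetical illustrations, not taken from any real product.

```python
from typing import Protocol

class Regulator(Protocol):
    """The well-specified interface (I/F) of the regulator subsystem."""
    def control_signal(self, setpoint: float, measured: float) -> float: ...

class DummyRegulator:
    """Dummy standing in while the real subsystem is not ready yet."""
    def control_signal(self, setpoint: float, measured: float) -> float:
        return 0.0  # no correction yet

class ProportionalRegulator:
    """A simple real implementation behind the same interface."""
    def __init__(self, gain: float) -> None:
        self.gain = gain

    def control_signal(self, setpoint: float, measured: float) -> float:
        return self.gain * (setpoint - measured)

def feature_adjust_speed(regulator: Regulator, target: float, current: float) -> float:
    # The feature depends only on the interface, never on the implementation,
    # so the dummy can be swapped for the real subsystem without changing it.
    return current + regulator.control_signal(target, current)
```

Swapping `DummyRegulator` for `ProportionalRegulator` requires no change in the feature code, which is exactly what a clear I/F buys in terms of reuse and parallel development.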

Back now to the activities in the value stream mapping. These activities are normally also on the Critical Path, which has been generated transdisciplinarily and iteratively with the project team and its sub-teams, common experts, stakeholders, centralised resources, etc., i.e. also taking the resource constraints of the total organisation into account.

When activities are on the Critical Path, it means that if they are delayed, the product or service is delayed. The Critical Path can be viewed on different levels: from the project as one activity, down to a detailed level where the activities of the teams or even individuals can be seen. The Critical Path is the key to Flow Efficiency improvements of the total product or service.
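For readers who want to see what "the Critical Path" means computationally, here is a minimal sketch of finding it from a time plan of activities with durations and dependencies. The activity names and numbers are invented for illustration; a real time plan would of course come out of the Project Processes.

```python
from graphlib import TopologicalSorter

# Hypothetical time plan: activity -> (duration in days, set of predecessors)
plan = {
    "spec":      (5,  set()),
    "design":    (10, {"spec"}),
    "subsys_a":  (15, {"design"}),
    "subsys_b":  (20, {"design"}),
    "integrate": (8,  {"subsys_a", "subsys_b"}),
}

def critical_path(plan):
    """Return (list of activities on the Critical Path, total length in days)."""
    finish = {}  # earliest finish time per activity
    parent = {}  # predecessor on the longest path to each activity
    for act in TopologicalSorter({a: d[1] for a, d in plan.items()}).static_order():
        duration, preds = plan[act]
        start = max((finish[p] for p in preds), default=0)
        finish[act] = start + duration
        parent[act] = max(preds, key=finish.get, default=None)
    # Walk back from the activity that finishes last
    act = max(finish, key=finish.get)
    path = []
    while act is not None:
        path.append(act)
        act = parent[act]
    return list(reversed(path)), max(finish.values())
```

With the numbers above, the path runs spec → design → subsys_b → integrate (43 days): a delay in subsys_b delays the delivery, while subsys_a has slack.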

If the Critical Path cannot be presented, the focus is definitely not on Flow Efficiency.

A VSM made on knowledge work needs to focus on the activities on the Critical Path, in the same way as for production, as stated above. This means that we can do a VSM on the time plan, one of the outputs from the Project Processes***, since there we have all the activities for the project, with the needed time box per activity.

As stated before, VSM was originally used for production, which aggregates parts (no interdependencies) to achieve the product, but it was later broadened to be used also for knowledge work with projects.

But for project work this can be very tricky, because project work integrates at Integration Events, with also a lot of interdependencies between the teams, and to experts, between the Integration Events. And remember that the activities are very different from each other and seldom exactly the same, not even between two similar projects. This of course means that the focus is mainly on the waiting times, which is the same as in production/service/office. Since project work is also a creative process, it is also very difficult to say from the beginning what will be non-value-adding work in our activities.

And as stated before, when we find a problem, a pain point, we must always keep asking why until we find the real root cause. And remember that multiple symptoms may lead back to one root cause, so all the pain points found will most probably end up in only a few root causes.

If we have not asked multiple questions of why we are doing something, we are most probably sub-optimising our organisation.

This maps well with the ambiguity among thought leaders, where some of them recommend making a VSM on the Project Processes, while others state that it is impossible to make a VSM in a complex system. So it is ambiguous whether VSM should be made on projects or not.

From the above we know that when we do a VSM with its Flow Efficiency calculations in knowledge work, we must do as follows:

  • do it from the outcome of the Project Processes, the activities in the time plan, and not on the Project Processes themselves. Do it at the beginning of the project, and keep track of the Critical Path and especially its waiting times.
  • to avoid sub-optimising, do not go down to too detailed a level; just seeing the value flow is often in itself valuable for the organisation
  • do an analysis of the pain points found in the value flow, with the help of the Prefilled Problem Picture Analysis Map, together with other known pain points in the organisation, and always ask multiple whys, to find the real root causes.

In this way we get a good view of the value flow in the total organisation, and we can improve the system by taking care of the root causes, with no risk of sub-optimising.
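The Flow Efficiency calculation implied by the steps above can be sketched as follows: the value-adding time of the activities on the Critical Path, divided by the total lead time including the waiting times between them. The numbers are purely illustrative.

```python
def flow_efficiency(activities):
    """activities: list of (value_adding_days, waiting_days_after) per
    Critical Path activity. Returns value-adding time / total lead time."""
    value_adding = sum(value for value, _ in activities)
    lead_time = sum(value + wait for value, wait in activities)
    return value_adding / lead_time

# Hypothetical Critical Path: 43 value-adding days, 10 days of waiting
critical_path_activities = [(5, 2), (10, 5), (20, 3), (8, 0)]
print(f"Flow efficiency: {flow_efficiency(critical_path_activities):.0%}")
```

Note that, per the bullet points above, the point is the waiting times on the Critical Path as a whole, not measuring individual pieces below feature level.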

In the next blog post we will continue with Agile teams and, later, their flow calculations with Process Cycle Efficiency, which is widely used. It means that it is time for some valuable insights, since so far we have learnt a lot about what system collaboration really is. These calculations, too, put a treacherous constraint on the teams, since the constraint is not explicit. Thrilling, right? C u then.


*If there is no legacy architecture, a new architecture prepared for the coming features may give a smoother path when breaking down the features into user stories, possibly with the help of microservices. Remember also that Conway’s law will add the communication structure into the software code, depending on the teams’ closeness to each other.

**A platform with modules, where the modules are updated incrementally, is the closest hardware product development can come to features, even if it is not exactly the same.

***A VSM made directly on, for example, the project management processes themselves, valid for planning, follow-up, etc. of the project, will probably have only a very small effect, i.e. it is the wrong focus area. We also have the product-oriented processes, which specify at a higher level how to take care of some of the activities in the time plan.
But since knowledge work is not repetitive and all the interdependencies cannot be seen in the processes, it is treacherous to try to make a VSM on the processes directly, and the risk of sub-optimising is very high when there is no Big Picture to be seen either.

[1] Cox, Brad J., “Planning the Software Industrial Revolution”, IEEE Software Magazine, November 1990. Link copied 2019-01-29.
