Have you succeeded in avoiding Big Bang integration testing, only to get stuck in the noxious Gig Bang integration testing instead?

To understand why we historically got something called Big Bang integration testing, we need to start somewhat earlier, namely with the Waterfall method, which is often considered disreputable in software development.

The Waterfall method was originally used in hardware projects, where the product, during the early systems design phase, is divided into modules (often mapped directly to the subsystems in the architecture) with their own functional and non-functional requirements. These modules are designed not only to fit together at the later integration phase into a fully integrated system, but also to fulfil the functional and non-functional requirements of the system as a whole, which is checked during the system verification phase. The aim of the systems design phase is to reduce complexity, and it can be applied iteratively on big modules if necessary.

To avoid a Big Bang integration testing (incl. verification) of the full system, hardware projects have for more than half a century used prototypes for the first integration events, and of course also for earlier integration testing between modules with (new) complicated interfaces and relationships. The reason for doing this is that even when the systems design has been done properly, knowledge needs to be gained in order to continually reduce the complexity. Due to complexity, trying to make the perfect specification is not possible, at least not within reasonable time and money. It is better to use a few planned prototypes. The more novel the product is, the more knowledge needs to be gained and the more prototyping is needed.

Even better at reducing complexity than the prototype thinking in the Waterfall method is Lean Product Development. Lean Product Development for hardware (from Toyota, and many other Japanese companies too) takes this a step further, especially when making modular platforms. This is achieved by early experimentation using techniques like Set-Based Design, Multiple Concepts and Model-Based Systems Engineering (MBSE). Frankly, it is all about reducing the complexity of the wholeness, or the Big Picture, of the system product you are planning to build, by gaining new knowledge, and doing that as early as possible.

So, the question is why on earth there is a phenomenon called Big Bang integration testing in software development, since hardware development seems to have worked out how to reduce complexity. The reasons are surely plenty, like the many differences between hardware and software, but today we will focus on what we in software development have done to try to avoid Big Bang integration testing.

By just looking at the mere number of different integration testing strategies for software development, we can see that we have not been lazy. We have Big Bang, Top-down, Bottom-up, Sandwich, incremental, functional incremental and so on. We can divide them into two different types of work package strategies: one is modules, and the other is incremental functionality with end-to-end customer value, like in agile software development. When we see modules, we can understand that some kind of systems design has been made, like in hardware, and many times the modules are mapped directly to subsystems in the architecture. Due to the module hierarchy, we can also see that the systems design is made top-down, which means that it is scalable no matter the size, even though a big system of course has more complexity, which may require more early experimentation and not only prototypes.
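
To make the difference between a couple of these strategies concrete, here is a minimal sketch in Python, assuming a hypothetical OrderService on top of a hypothetical PaymentGateway (neither taken from any real system): Top-down integration exercises the top module early against a stub of the lower one, while Bottom-up would test the lower module first through a small driver.

```python
# Minimal sketch of Top-down integration testing with a stub.
# OrderService and PaymentGateway are hypothetical names used only for illustration.

class PaymentGateway:
    """Lower-level module; the real implementation may not exist yet."""
    def charge(self, amount: float) -> bool:
        raise NotImplementedError("real gateway not integrated yet")

class PaymentGatewayStub(PaymentGateway):
    """Stub standing in for the lower module so the top module can be integrated early."""
    def charge(self, amount: float) -> bool:
        return True  # just enough behaviour to exercise OrderService

class OrderService:
    """Top-level module under test in Top-down integration."""
    def __init__(self, gateway: PaymentGateway):
        self.gateway = gateway

    def place_order(self, amount: float) -> str:
        return "confirmed" if self.gateway.charge(amount) else "rejected"

def test_top_down_order_flow():
    # Top-down: integrate the top module against a stubbed lower module.
    service = OrderService(PaymentGatewayStub())
    assert service.place_order(100.0) == "confirmed"

# Bottom-up integration would instead test PaymentGateway first, driven by a small
# test driver, and only later plug it into OrderService; Sandwich combines both.
```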

In Agile development our architecture is top-down as well, which means that size does not matter. Of course, we can also have an agile architecture strategy, where we later fill in the details of the architectural subsystems with microservices, for example, or even a strategy where we add architectural subsystems gradually, but the architecture is still top-down, i.e., scalable. But what about the end-to-end incremental work packages that do not correspond to subsystems in the architecture; are they scalable?

If we have a smaller software product, taken care of by one or a few teams, like in a small company, the teams have full control not only over the architecture, but also over the overall functional and non-functional requirements of the product, so they can do the systems design, the system integration and the system testing. But what can happen when we are a big company with a big product, and we have started our agile journey with a few agile teams that have now proven successful, and it is time to scale to all software development teams?

The common way is to do a bottom-up scaling when we add more agile teams, meaning that we neglect the wholeness and the top-down approach for reducing complexity of the whole. The teams will then work with functionality in the product that has not been part of any systems design. So we are not talking about a lack of trust in our great agile teams; they simply have not been given the right prerequisites, since no one has taken care of the whole first. The risk is then high that the teams are doing their own gig, with functionality that is too loosely coupled, meaning that we are unable to fulfil the requirements of the whole.

If the interconnectivity within our system product only had the complexity level of a jigsaw puzzle, a top-down architecture alone would be fine. But the parts in software products are instead deeply interconnected and related, where the butterfly effect is a good example of what can happen when the systems design has been neglected. To avoid extremely high risk-taking, it is therefore always important to start with the systems design, no matter the number of teams involved or the size of the product, in order to reach enough detail in comparison with the uncertainty of customer acceptance of the product.

This means that without a proper systems design, we are only making a false integration of the parts into the whole, i.e., we are putting the code together by aggregation into a whole. And it does not matter if the parts themselves have a high quality level.

Beware that Continuous Integration will not help us here, since only the parts, one by one, have built-in quality, due to a thorough systems design per part by the teams. But that does not at all mean that we have done systems design on the whole, which also means that the whole system product will not have built-in quality. We have escaped Big Bang integration testing with modules, but instead ended up in Gig Bang integration testing, by practicing false integration in our desire to achieve Continuous Integration.
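
To make this false integration concrete, here is a small hypothetical sketch (all names invented for illustration): each team's part-level test is green on its own, and a pipeline running only such tests stays green, yet the whole is wrong because the two parts disagree about the unit of the amount they exchange. Only verification of the whole can reveal it.

```python
# Hypothetical example: each part passes its own tests, yet the whole is broken,
# because PricingService returns euros while InvoiceService expects cents.

class PricingService:                      # built by team A, returns euros
    def total(self, items: int) -> float:
        return items * 9.99

class InvoiceService:                      # built by team B, expects cents
    def create_invoice(self, amount_in_cents: int) -> str:
        return f"Invoice over {amount_in_cents / 100:.2f} EUR"

def test_pricing_alone():
    # Team A's part-level test: green, InvoiceService is not involved.
    assert abs(PricingService().total(2) - 19.98) < 1e-9

def test_invoice_alone():
    # Team B's part-level test: green, PricingService is not involved.
    assert InvoiceService().create_invoice(1998) == "Invoice over 19.98 EUR"

def check_the_whole():
    # The aggregated whole: the amount in euros is silently treated as cents.
    amount = PricingService().total(2)                      # 19.98 euros
    print(InvoiceService().create_invoice(int(amount)))     # "Invoice over 0.19 EUR"

if __name__ == "__main__":
    check_the_whole()
```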

So, our common answer to avoiding Big Bang integration testing for big software products is definitely not to only make things smaller and smaller and, with some Bottom-Up magic, think that the complexity has vanished when we combine the parts into a whole again. No, what happens is only that we have taken gigantic (another apt gig 😉 ) risks, since with the absence of systems design we really have no clue if the parts will fit, especially not regarding non-functional requirements like performance, error handling, traceability, security, etc.
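
As one small, hypothetical illustration of why such non-functional requirements can only be verified on the whole: each part below meets its own (invented) latency budget, yet the end-to-end chain breaks the (equally invented) budget for the whole, so the assertion fails; no amount of part-level testing would have shown this.

```python
import time

PART_BUDGET_S = 0.05    # hypothetical budget each team verifies its own part against
WHOLE_BUDGET_S = 0.10   # hypothetical non-functional requirement on the whole system

def part_a(): time.sleep(0.04)   # within its own part budget
def part_b(): time.sleep(0.04)   # within its own part budget
def part_c(): time.sleep(0.04)   # within its own part budget

def test_end_to_end_latency():
    start = time.perf_counter()
    part_a(); part_b(); part_c()     # the whole chain, as the customer experiences it
    elapsed = time.perf_counter() - start
    # Each part meets PART_BUDGET_S, yet the whole (~0.12 s) exceeds WHOLE_BUDGET_S.
    assert elapsed <= WHOLE_BUDGET_S, f"end-to-end latency {elapsed:.2f} s over budget"
```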

This also means that a Built-in Quality strategy that only focuses on the parts or features that the teams build is a naïve strategy for getting built-in quality on the whole. And no Continuous Improvement (Lean term: Kaizen) in the world can help us, since it is problem-solving on the whole (Lean term: Jidoka) that is needed. The reason is that we introduced a lot of symptoms when we neglected the systems design, and the root cause is of course fixed by fixing the systems design. Continuous Improvement is only a method to improve a standalone, independent, stable and standardized process, not to solve symptoms, which is also impossible, since we would then have the butterfly effect again.

So, now that we have seen the many different attempts that we have made in agile software development, the first step is to understand why we needed to make these attempts, since hardware did not need them in the first place. Context dependence is king, as always in complex systems, and we can also see the necessity of top-down thinking when the complexity is increasing, so we need to find out more about both of them and their relationship. The second step is then to see if we can find an overall method that, with some different flavours, can cover different contexts, which is of course easier said than done*, but we need to give it a perseverant try.

The coming blog posts will dig further into this matter, since we really need to do something about it. That will most probably mean that we need to change our patterns of thought, so we can think anew to be able to act anew. Before that come some other important and related blog posts about verification and validation, hypotheses and innovations, since they are key to understand before digging deeper into what we can do about it.

C u soon again.

 

*There are tremendously many parameters to take care of that set the context, like: complexity, uncertainty, size of product, size of organisation, competence, experience, back-end, UX, etc.