TDSD – Test-Driven Systems Design – Systems design and systems test

From the series of articles about the deductions made in System Collaboration, we could read about the necessity of iteratively reducing transdisciplinary complexity/complicatedness, especially when developing completely new products or platforms. This is what we normally talk about as systems design. Included in systems design is also the importance of first making the test cases for our product, so that we know what we are designing for, and so that the systems test continuously gives us the answer as to whether our systems design will do the job. This is the heart of our test-driven systems design thinking, which gives us a proper method for doing systems design. With test-driven systems design, we significantly reduce the risk of a big failure at the end, as in Big Bang integration where prototypes have been neglected in waterfall software projects, or as in Big Bang integration in a scaled agile way of working with no systems test of the whole. With a well-thought-out systemic systems design, we can instead achieve a united and well-functioning whole, a coherent and cohesive whole, that in the end can be released to the customer, preferably in increments valuable to the customer, if possible. This means a structured way of iteratively reducing the whole system into its parts, i.e., like prototyping, which reduces transdisciplinary complicatedness, but where TDSD takes this one step further and is able to reduce also transdisciplinary complexity for completely new products and platforms.

The first step in TDSD is to analyse the requirements, both functional and non-functional, in order to form a hypothesis about the test cases needed for our whole system, and about how to divide this unknown solution of the whole into parts; these parts constitute our first abstraction level. This is then repeated iteratively, if necessary, depending on the size of our system, to find the right granularity of the sub-systems (components). For every abstraction level, from the highest to the lowest, the transdisciplinary complexity is reduced by systems design for every component (sub-system) on that level, which altogether makes up the total architecture of our product. Systems design is not only about dividing into components and WHAT each component will do, but also, most importantly, HOW they will interact. The latter step is extremely important and is where all the non-functional requirements on the product are taken care of, so that each component derives its respective non-functional and functional requirements from the outcome of this step. Systems design is altogether very tricky, and at least a few iterations are most probably needed. Here is a picture describing the scenario with TDSD's iterative, as well as intertwined, approach for making the architecture.
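To make the steps above concrete, here is a minimal sketch in Python of how one TDSD iteration could be recorded: requirements on the whole are analysed, the system-level test cases are written before the division into parts, and the division captures both WHAT the components do and HOW they interact. All names, structures and requirements here are hypothetical illustrations of the idea, not a prescribed format:

```python
# A minimal sketch (all names hypothetical) of one TDSD iteration:
# test cases first, then the division of the whole into parts.
from dataclasses import dataclass, field

@dataclass
class Requirement:
    rid: str
    text: str
    functional: bool  # False => non-functional (performance, security, ...)

@dataclass
class TestCase:
    tid: str
    verifies: list[str]  # requirement ids this test case covers
    description: str

@dataclass
class Component:
    name: str
    derived_requirements: list[Requirement] = field(default_factory=list)

@dataclass
class AbstractionLevel:
    level: int
    test_cases: list[TestCase]   # written BEFORE the division into parts
    components: list[Component]  # WHAT each part does
    interactions: dict[tuple[str, str], str]  # HOW the parts interact

# System level (level 0): test cases first, then the division into parts.
system_reqs = [
    Requirement("R1", "Transfer money between accounts", functional=True),
    Requirement("R2", "Respond within 2 s under peak load", functional=False),
]
level0 = AbstractionLevel(
    level=0,
    test_cases=[TestCase("T1", ["R1", "R2"], "End-to-end transfer under load")],
    components=[Component("ui"), Component("transfer-service"), Component("ledger")],
    interactions={("ui", "transfer-service"): "REST, must answer < 500 ms",
                  ("transfer-service", "ledger"): "async queue, at-least-once"},
)
```

Note how the non-functional requirement R2 only becomes a requirement on the individual components via the interactions entry; this is the HOW step from which each component derives its own requirements, and the same structure is then repeated on the next abstraction level.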

This picture is a refinement of the picture in this System Collaboration Deduction article, where the need for a systems-designed architecture is deduced.

From the test cases, the systems test development activities, for example performance tests with load & stability, can also be started, so that this blueprint, this architectural skeleton we are trying to achieve, can be verified as soon as possible, i.e., another important part of the concept of test-driven systems design. In some domains, where the software is the actual product, the response time is critical for survival and the first impression is everything, which means that time is more important than a proper systems design. But in other domains, like finance, banks, insurance companies and governments, the software only supports the product, which gives the users some more patience. Without the step of HOW the components will interact, mentioned above, we will not get an architecture artifact, but instead a sketch artifact*, since we have not handled the non-functional requirements correctly; see this blog post for a deep-dive.
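As a hedged illustration, a load & stability test of the architectural skeleton could start as small as the sketch below. The endpoint, the response-time budget and the load figures are assumptions for the sketch, not recommendations; the point is only that the skeleton can be verified against a non-functional requirement long before the product is complete:

```python
# A minimal load & stability sketch (endpoint and budget are hypothetical)
# that can run against the architectural skeleton stubs very early on.
import statistics
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

ENDPOINT = "http://localhost:8080/transfer"  # assumed skeleton stub
RESPONSE_BUDGET_S = 2.0                      # from non-functional req R2

def one_request() -> float:
    """Time a single request against the skeleton."""
    start = time.perf_counter()
    urlopen(ENDPOINT).read()
    return time.perf_counter() - start

def load_test(concurrent_users: int = 50, requests_per_user: int = 20) -> float:
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        timings = list(pool.map(lambda _: one_request(),
                                range(concurrent_users * requests_per_user)))
    p95 = statistics.quantiles(timings, n=20)[-1]  # 95th percentile
    assert p95 < RESPONSE_BUDGET_S, f"p95 {p95:.2f}s exceeds the budget"
    return p95
```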
Note that every sub-system (component) in the architecture, no matter its size, can be seen as a black box that is to be developed, integrated, verified and validated before delivery. The team (or team of teams) always needs to do these steps, which means that the teams need to be both I-shaped and T-shaped to some degree. This leads us to focus on the team securing that the component (or parts of it) is tested at delivery, and never on any x-shape being better than another.
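For instance, a black-box verification of a component at delivery could look like the following sketch, where all names are hypothetical and the in-memory class only stands in for the real component behind the same interface:

```python
# A black-box sketch (all names hypothetical): the component is verified
# only through its agreed interface, regardless of which team shape
# (I, T, ...) produced it.
class InMemoryLedger:
    """Stand-in for the real component behind the same interface."""
    def __init__(self):
        self._balances: dict[str, int] = {}

    def post(self, account: str, amount: int) -> None:
        self._balances[account] = self._balances.get(account, 0) + amount

    def balance(self, account: str) -> int:
        return self._balances.get(account, 0)

def test_ledger_black_box(ledger) -> None:
    # Only the interface is exercised: no assumptions about internals.
    ledger.post("acc-1", -100)
    ledger.post("acc-2", +100)
    assert ledger.balance("acc-1") + ledger.balance("acc-2") == 0

test_ledger_black_box(InMemoryLedger())
```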
Note also that the UX or industrial design will be one component in the architecture, and it is where we have the biggest uncertainty, since we do not know exactly what our user interface will look like. At the beginning of reducing this uncertainty is when we need to gain the most knowledge about what the user interface needs to look like, which also means that our iterations need to be at their shortest, in order to get fast feedback.

A warning here is apt for agile software product development nowadays, where the risk is high that the step of HOW the components will interact is neglected at the highest level of the way of working. This means that our architecture is instead only a sketch artifact, like a big jigsaw puzzle, which is treacherous when we are developing big systems. This is especially so if the organization makes a bottom-up transformation when going towards agile at scale and big systems, since the teams will then start doing some easy maintenance, or work only on parts of minor new functional requirements of, in both cases, an already existing product. This means that when making a novel product with no systems design done at the top level, these parts are only containers (value streams) with functional requirements within a sketch, and not real software components within a properly systems-designed architecture of the total product. In agile development, the terms emergent architecture/design and refactoring are often mentioned. But somewhere, depending for example on the size of the system (assuming a system that does not require a high level of security), there is an invisible line between where refactoring can be done to restructure the architecture and the code, and where it cannot. As long as refactoring can be done, we can also talk about the possibility of an emergent architecture and emergent design; but when the system needs more than a few teams, an emergent architecture and emergent design will exponentially increase the risk. We can also add that for big systems we most often know WHAT system we need to make, which means that the uncertainty is very low. This goes, for example, for retail, banks, insurance and other financial companies, and also governments, so why should we then even take this kind of risk on HOW to make our system? Without doing a proper systems design, the HOW, we also exponentially increase the risk regarding our ability to achieve Requirements Traceability – RT.

RT in short means that we need to keep track of the initial ideas, via the requirements and the systems design, all the way down to the realisation of the system, with the possibility to go backwards as well. We can clearly see that there will be levels of different kinds of artifacts, and we can also see the need for a top-down approach to keep track of these levels. RT is always important to consider for systems, especially big systems, due to their higher complexity, which in turn requires a better overview, and more structure and order. RT is even more important when a new system is developed from scratch. The more security requirements the system has, for example legal, risk and compliance requirements in banking systems, the more the importance of RT rises as well, and that goes for any size of system.
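The ability to go forwards and backwards through these levels can be illustrated with a minimal sketch; the link structure and all identifiers below are hypothetical:

```python
# A minimal traceability sketch (all identifiers hypothetical): links run
# from idea -> requirement -> design element -> code/test, and can be
# walked in both directions, which is the essence of RT.
TRACE = {
    ("IDEA-7", "R1"),
    ("R1", "transfer-service"),
    ("transfer-service", "commit:ab12f"),
    ("R1", "T1"),
}

def forward(node: str) -> set[str]:
    """What does this idea/requirement/design element lead to?"""
    return {dst for src, dst in TRACE if src == node}

def backward(node: str) -> set[str]:
    """Why does this artifact exist, one level up?"""
    return {src for src, dst in TRACE if dst == node}

print(forward("R1"))               # {'transfer-service', 'T1'}
print(backward("transfer-service"))  # {'R1'} -> backward('R1') -> {'IDEA-7'}
```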
Neglecting to make a systems design, which in turn leads to improper RT, is instead a top risk for not making the product right. For big systems, this is never interchangeable with having the flexibility to make the right product, since no one can take the big risk of making a big product without having customers. This becomes even more obvious when looking at all aspects of achieving a good way of working, where the foundation that TDSD is built on can be found in System Collaboration Deductions. See also this blog post for a real deep-dive regarding RT, covering topics like communication, documented information, metadata, version control, revisions, baselines, tags and labels, configuration management and many more correlated subjects.

Putting everything together about systems design, we end up with this picture, the complexity of product development, showing the need for systems designing the products in any domain of product development. As we can see in the picture, finding the solution for the new product is done iteratively, together with the accompanying integrations, verifications and validations of the parts of the architecture, at timely IEs – Integration Events.
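An Integration Event can be thought of as a gate: delivered, verified components are integrated, and the system-level test cases from the top abstraction level are run, instead of one Big Bang at the end. The following minimal sketch (all names hypothetical) only illustrates that idea:

```python
# A minimal Integration Event sketch (all names hypothetical): integrate
# the delivered components and run the system-level test cases at every IE.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Delivery:
    component: str
    verified: bool  # black-box verified by the team before delivery

def integration_event(deliveries: list[Delivery],
                      system_tests: list[Callable[[], bool]]) -> bool:
    if not all(d.verified for d in deliveries):
        raise RuntimeError("IE blocked: an unverified component was delivered")
    # Verification of the whole: run the system-level test cases.
    return all(test() for test in system_tests)

ok = integration_event(
    [Delivery("ui", True), Delivery("transfer-service", True)],
    [lambda: True],  # placeholder for e.g. the end-to-end transfer test T1
)
```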

The next article in the series about TDSD is about the importance of always planning each of the initiatives in product development, to avoid sub-optimization in the organization.

 

*A common worry today is that our organisations are viewed through the eyes of the engineering metaphor. This means that first the problem to be solved is reduced into smaller parts to be solved, and then the results are aggregated together into a whole; see this blog post by Dave Snowden for more details about this problematic thinking. The worry is at least twofold. One is that the reduction goes into too small parts, for the wrong reasons: a) to try to reduce complexity, and b) "less (small) is more", making too small activities in the ordered domains, which results in both low flow efficiency and bad resource efficiency, being both inefficient and ineffective, where b) also makes us lose the Big Picture. The other is that aggregation is only possible when we know that the parts fit together, as in production. Aggregation can only be used in production, which means the Clear domain, and aggregation is a subset of integration. Even though we have done a proper systems design for our product development initiative, we do not know if the parts will fit together at our integration event, which is the reason for using prototypes. If we have not done a systems design in product development, we are actually doing a false integration, not a proper integration, since we have not even tried to reduce the complexity. This leads to us getting a sketch artifact and not a proper architecture artifact.

A later article in the TDSD series covers the need for a portfolio team, which is the starting point for the implementation of the different virtual delivery teams needed, where the different team constellations strongly depend on the domain knowledge of the different individuals and teams in the organization.
