The SOSD method – Systems design

As stated earlier, systems design is done in order to reduce the transdisciplinary complexity/complicatedness. This is done iteratively, together with the accompanying integrations, verifications and validations of the parts of the architecture, at timely IEs – Integration Events. All this is to reduce the risk of a big failure at the end, and instead achieve a united and well-functioning whole, a coherent and cohesive whole that can be released to the customer, preferably in increments valuable for the customer, if possible. This means a structured way of reducing the whole system into its parts, often with the need for prototypes. Here is an attempt to explain the difficulty with systems design:
The specified requirements on the respective parts can never be set in advance: what each part must do to fulfil its functionality, as well as its non-functionality, i.e., the parts’ interactions needed to become a unified and well-functioning whole, all in order to fulfil the specified functional and non-functional requirements on the whole. This is what is meant by transdisciplinary (integrative) complexity/complicatedness; we cannot specify the solution of the respective parts from the start. Instead, we need to integrate the parts (from a well-informed hypothesis) into a whole, and verify and validate the whole, in order to see if our assumption or hypothesis, the systems design of the whole including the parts, was correct. And regarding this reduction of transdisciplinary complexity/complicatedness, there is no difference between hardware and software. Reducing transdisciplinary complexity/complicatedness often means prototypes, in order to gain new knowledge from the tests on the integrated whole, until we have all the knowledge needed for developing our product. This is very different from disciplinary complexity, which is about digging down into a discipline in order to find new science or new technology, and which is therefore mostly part of the way of working for hardware. This is normally called research, where we do not even know if we will find what we are looking for, and if we find it, we do not know when.
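To make this iterative loop a bit more concrete, here is a minimal sketch in Python. The parts, the numbers and the revise step are all invented for illustration; it is not the SOSD method itself, only the shape of the hypothesis-integrate-verify-validate loop described above.

```python
# Systems design as a hypothesis: new knowledge is only gained when the parts are
# integrated, verified and validated as a whole at an Integration Event (IE).

def integrate(design):
    """Build a prototype whole from the hypothesised parts (an Integration Event)."""
    return {"parts": design["parts"], "open_questions": design["open_questions"]}

def verify_and_validate(whole):
    """Test the integrated whole; it only passes when no interaction questions remain."""
    return whole["open_questions"] == 0

def revise(design, whole):
    """Learn from the prototype: each iteration answers some of the open questions."""
    return {**design, "open_questions": max(0, whole["open_questions"] - 2)}

design = {"parts": ["sensor", "controller", "actuator"], "open_questions": 5}
prototypes = 0
while True:
    whole = integrate(design)          # timely Integration Event
    if verify_and_validate(whole):
        break
    prototypes += 1
    design = revise(design, whole)     # the prototype gave us new knowledge
print(f"Unified and well-functioning whole after {prototypes} prototype(s)")
```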

So, we can never fully specify the requirements for the parts in a highly transdisciplinary complex context, since it is impossible to have that knowledge in advance. This is what Alicia Juarrero refers to as new cohesiveness. This is why all new product development, no matter the domain, has high transdisciplinary complexity. Simply put, we can only gain new knowledge at the integrations of the parts into a whole, including the tests of the whole, since we can never specify our way out of any complexity. This is why prototypes are self-evident in hardware product development; they have been an integral part of how we humans, probably for hundreds of thousands of years, have reduced transdisciplinary complexity/complicatedness in “hard” domains. Likewise, if we are developing a completely new product today, we need a lot of experimentation, but from well-reasoned hypotheses, to gain new knowledge so we can reduce the transdisciplinary complexity before starting the initiative. Changes to an existing product, or a variant of it, only need exploitation, which means reducing transdisciplinary complicatedness; the results from one or two prototypes, analysed by experts, will give us the knowledge needed to make the necessary final updates, and to integrate, verify and validate into a unified and well-functioning whole. But when we, for example, are combining different existing monoliths (disciplines) into a common modular platform instead, we only have the disciplinary science and technology for each monolith. The new platform, on the other hand, is a novel thing with high transdisciplinary complexity to reduce, especially when we combine all the non-functional requirements, so we need novel knowledge that we do not have. This is why systems design is key when we are developing new products in any domain; software, hardware, new (architectures for) buildings, bridges, etc.

We start the systems design by first analysing the requirements, to be able to come up with a hypothesis on how we can divide the unknown solution of the whole into parts, our first abstraction level. This is then repeated in an iterative manner, if necessary, depending on the size of our system, to find the right granularity of components. For every abstraction level, from the highest to the lowest, the transdisciplinary complexity is reduced by a systems design for every component on that abstraction level, which is also the start of the architecture of the product. The systems design is not only about dividing into components and WHAT each component will do, but also, most importantly, HOW they will interact. The latter step is extremely important and is where all the non-functional requirements on the product are taken care of, so that each component also derives its respective non-functional and functional requirements from the outcome of this step. Systems design is altogether very tricky, and many iterations are most probably needed. Preferably, the system test development activities, for example performance tests with Load & Stability, are also started, so that this blueprint, this architectural skeleton we are trying to achieve, can be verified as soon as possible. For some domains where the software is the product, the response time is critical for survival, since the first impression is everything. But for other domains, like finance, banks, insurance companies and governments, the software only supports the product, which gives the users somewhat more patience. Without this latter step of HOW the components will interact mentioned above, we will not get an architecture artifact, but instead a sketch artifact*, since we have not handled the non-functional requirements correctly; see this blog post for a deep dive.
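As a hedged illustration of this HOW step, here is a minimal Python sketch. The component names, operations and latency budgets are hypothetical; the point is only that the interactions are captured as explicit contracts that carry a non-functional budget, which each component inherits as a derived requirement and which an early Load & Stability style check can verify against the architectural skeleton.

```python
import time
from dataclasses import dataclass

@dataclass(frozen=True)
class InteractionContract:
    """HOW two components interact, including a non-functional budget."""
    consumer: str
    provider: str
    operation: str
    max_latency_ms: float      # derived non-functional requirement on the provider

CONTRACTS = [
    InteractionContract("web_frontend", "order_service", "place_order", max_latency_ms=200),
    InteractionContract("order_service", "payment_service", "charge", max_latency_ms=100),
]

def call_stub(contract: InteractionContract) -> float:
    """Stand-in for calling the real (or skeleton) provider; returns latency in ms."""
    start = time.perf_counter()
    time.sleep(0.01)           # the skeleton implementation does almost nothing yet
    return (time.perf_counter() - start) * 1000

def load_test(contracts, iterations=50):
    """Early Load & Stability check: does the skeleton respect each latency budget?"""
    for contract in contracts:
        worst = max(call_stub(contract) for _ in range(iterations))
        status = "OK" if worst <= contract.max_latency_ms else "VIOLATION"
        print(f"{contract.operation}: worst {worst:.1f} ms "
              f"(budget {contract.max_latency_ms} ms) -> {status}")

if __name__ == "__main__":
    load_test(CONTRACTS)
```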
Note that every component, no matter its size in the architecture, can be seen as a black box that is to be developed, integrated, verified and validated before delivery. The team (or team of teams) always needs to do these steps, which means that the teams need to be both I-shaped and T-shaped to some degree. This means that we need to focus on the team securing that the component (or parts of it) is tested at delivery, and never on whether any x-shape is better than another.
Note also that the UX or industrial design will be one component in the architecture, and it is where we have the biggest uncertainty, since we do not know exactly what our user interface will look like. The beginning of reducing this uncertainty is when we need to gain the most knowledge about what the user interface needs to look like, meaning that our iterations also need to be the shortest, in order to get fast feedback.

A warning is apt here for agile software product development nowadays, where the risk is high that the step of HOW the components will interact is neglected at the highest level in the way of working, meaning we only get a sketch artifact. This is especially true if the organization is making a bottom-up transformation towards agile at scale and big systems, since the teams will then start by doing some easy maintenance, or by working only on parts of the functional requirements of the already existing product. This means that, when making a novel product without a systems design at the top level, these parts are only containers (value streams) with functional requirements within a sketch, and not real software components within a real architecture of the total product. In agile development, the terms emergent architecture/design and refactoring are often mentioned. But somewhere, depending on, for example, the size of the system (and provided it does not require a high level of security), there is an invisible line between where refactoring can be done in order to restructure the architecture and the code, and where it cannot. As long as refactoring can be done, we can also talk about the possibility of an emergent architecture and emergent design, but when the system is too big, an emergent architecture and emergent design will exponentially increase the risk. And for big systems we most often know WHAT system we need to build, so why should we then even take this kind of risk on HOW to build it? Without doing a proper systems design, the HOW, we also exponentially increase the risk regarding our ability to achieve requirements traceability – RT.

RT, in short, means that we need to keep track of the initial ideas, via the requirements and the systems design, all the way down to the realisation of the system, with the possibility to go backwards as well. We can clearly see that there will be levels of different kinds of artifacts, and we can also see the need for a top-down approach to keep track of these levels. RT is always important to consider for systems, especially big systems, due to their higher complexity, which in turn requires better overview, more structure and order. RT is even more important when a new system is developed from scratch. The more security requirements the system has (for example legal, risk and compliance in bank systems, and that goes for any size of system), the more the importance of RT rises as well.
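As a small, hypothetical illustration (the artifact names and levels are invented, and a real RT tool would of course be richer), RT can be thought of as a graph of artifacts on different levels that can be traversed both top-down and bottom-up:

```python
from collections import defaultdict

# Each link: parent artifact -> child artifact (top-down), from idea to realisation.
LINKS = [
    ("idea:instant-payments", "req:REQ-12 transfer completes < 2 s"),
    ("req:REQ-12 transfer completes < 2 s", "design:payment_service"),
    ("design:payment_service", "code:payment_service/transfer.py"),
    ("design:payment_service", "test:load_test_transfer"),
]

forward = defaultdict(list)   # top-down: idea -> requirements -> design -> realisation
backward = defaultdict(list)  # bottom-up: realisation -> design -> requirements -> idea
for parent, child in LINKS:
    forward[parent].append(child)
    backward[child].append(parent)

def trace(start, direction):
    """Walk the traceability graph from an artifact in either direction."""
    stack, seen = [start], []
    while stack:
        node = stack.pop()
        seen.append(node)
        stack.extend(direction.get(node, []))
    return seen

if __name__ == "__main__":
    print(trace("idea:instant-payments", forward))      # down to code and tests
    print(trace("test:load_test_transfer", backward))   # back up to the idea
```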

Neglecting to make a systems design, which in turn leads to improper RT, is a top risk for not making the product right. For big systems, this is never interchangeable with having the flexibility to make the right product, since no one can take the big risk of making a big product without having customers. This becomes even more obvious when looking at all aspects of achieving a good way of working with SOSD – Systemic Organizational Systems Design; see this blog post for further information about requirements traceability.

Putting everything together about systems design, we end up with this picture, the complexity of product development, showing the complexity of product development and the need for systems designing the products in any domain of product development.

The next blog post in the series about SOSD is about the need for virtual delivery structures in product development.

C u soon 🙂

 

*A common worry today is that our organisations are viewed through the eyes of the engineering metaphor. This means that first the problem to be solved is reduced into smaller parts to be solved, and then the results are aggregated into a whole; see this blog post by Dave Snowden for more details about this problematic thinking. The worry is at least twofold. One is that the reduction goes into too small parts for the wrong reasons: a) to try to reduce complexity, and b) “less is more”, making too-small activities in the ordered domains, which results in both low flow efficiency and bad resource efficiency, i.e., being both inefficient and ineffective, where b) also makes us lose the Big Picture. The other is that aggregation is only possible when we know that the parts fit together, for example in production. Aggregation can only be used in production, which means the Clear domain, and aggregation is a subset of integration. Even though we have done a proper systems design for our product development initiative, we do not know if the parts will fit together at our Integration Event, which is the reason for using prototypes. If we have not done a systems design in product development, we are actually doing a false integration, not a proper integration, since we have not even tried to reduce the complexity. This means that we will get a sketch artifact and not a proper architecture artifact.
