Are you stuck in a rabbit hole invaded by “experienced” artifacts?

This blog post is about tacit vs explicit knowledge, the difference between them, and why we really need to pay attention to that difference.

But first, let us start with a definition of the word artifact. An artifact in product development, software or hardware, is a by-product* of the actual development: intermediate artifacts like a backlog, a prototype, a time plan or a user story, and input/output artifacts for the whole system, like a system architecture, requirements, etc. Artifacts are always explicit knowledge, and in product development they are used intentionally since they are a necessity for us humans. They can be seen as a way to reduce complexity, or to keep track of things to remember, all to match the capacity limitations of our cognitive ability.

We also refer to artifacts when we read about them as parts of methods and frameworks, like in a recipe, and when an artifact is updated with information during its use. Focusing only on explicit knowledge is a huge issue in today's organizations trying to adapt, adopt, change or transform to a new way of working by copying someone else's method, like the Spotify model. Benchmarking, for example, copies only the most visible and explicit parts of the process, missing the important implicit ones. Today's organizations also focus on training, from which we likewise get only explicit knowledge, even though self-directed exercises can convey a very small part of the implicit knowledge. With too high a focus on the validity of the explicit knowledge in frameworks, we also miss the question of whether the framework will solve our problems at all, or even be better than what we had before.

Let us take architecture as an example. The word architecture is more than 2000 years old. Its meaning today is both the process and the product (the artifact) of designing and building, as distinguished from the skills associated with construction [1]. An architecture as a product (an input/output artifact) can belong to domains as different as a building or a software product, and it is achieved through the process of architecture. In product development, for both hardware and software, we would call this process systems design/engineering: a transdisciplinary process where the actual complexity is reduced iteratively, finally resulting in the architecture artifact**. Without the systems design/engineering we will not get an architecture artifact, only a sketch artifact from our hypothesis, no matter how many times we iterate the sketch.

Tacit knowledge is knowledge such as skills and experience [2], which can be very hard or impossible to transfer explicitly to other people, like riding a bicycle, playing a musical instrument, or doing a systems design. It is also important to add that acquiring tacit knowledge of a method or framework, and adding it to the CV, means being part of the whole process until the method or framework is fully implemented and proven to be working. This means that we cannot just take part in the start-up phase ten times and then state in the CV that we have skills and experience of the whole process, and we definitely cannot state anything about whether the method or framework would have worked at all. Explicit knowledge, on the other hand, is knowledge that can easily be transferred to other people [3], and can mostly be stored in different media. Explicit knowledge is often seen as complementary to tacit knowledge. In our architecture example above, the architecture process is where tacit knowledge is really required, resulting in the architecture product (the artifact). Just because we can read about the artifact does not mean that we understand the process behind it, since that still requires additional theory, skills and experience. This is especially valid since managing an organisation, being the complex adaptive system that it is, is about managing interactions between the people and interdependencies between the activities, truly showing that artifacts are only complementary.

The need to spread tacit knowledge around the organisation was one of the main conclusions in the article "The New New Product Development Game" from 1986 [4], which the authors referred to as "organizational transfer of learning"***. Only having Lessons Learned from earlier product development (projects), or attending courses, is simply not good enough, since that is only about artifacts, i.e., explicit knowledge.

The problem arises when we think that we can get rid of the tacit knowledge, people's skills and experience, just by documenting it so we get explicit knowledge, and be fine with that. If this were true, it would mean that we could attend a course and the next day teach that course to other people, since the explicit knowledge could just keep propagating. And just because we teach others, we suddenly refer to ourselves as experts. As you understand, that is only silliness. Dave Snowden often refers to this erroneous thinking, since knowledge management already in the late 1990s failed to show that tacit knowledge was available to other people after documentation [5]. Even more troubling is when the explicit knowledge is actually wrong or important parts are missing. Because a teacher who has only learnt from the explicit knowledge lacks the tacit knowledge, many people will be educated before the incorrectness is even revealed. This is especially treacherous in new product development of large system products, which can have very long feedback cycles for the total system. And here is where the artifacts come into play, since artifacts are only explicit knowledge, which we read about for example in a product development method or framework. This means that we can never rely only on the artifacts, or on how they are built, instead of the problem we actually want to solve. If we do, we unfortunately start to treat the artifacts as if they were built up of tacit knowledge: a one-size-fits-all structure of artifacts, building up to a process, that will solve any problem for us. This leaves us totally blind when a new way of working is required. Here are some different situations where we really need to be vigilant:

  • a context shift, when we go from an easy context to a complex one
  • scaling, when we go from small systems to big systems
  • a shift to lower uncertainty, when we know the customer need

Instead, we often need transdisciplinary tacit knowledge in order to reduce the complexity of our problem, which then makes it easy to see which artifacts we really need.

A very interesting and apt example is a systematic case study from January 2013 [6], with the aim to "develop an artifact model for agile development". This was done by searching the internet for publications containing typical agile development terms combined with the term artifact, which resulted in a hundred publications. The search interval was from the date of the Agile Manifesto in 2001 up to September 2012. The developed artifact model, with its 19 commonly used artifacts and their relations, describes the process of agile development very well, where Scrum, XP and Kanban were among the top five processes found. The study also states that the most common practices are TDD (Test-Driven Development) and Refactoring****.

Going back to the architecture artifact discussed above, it is missing from the developed artifact model. This is not strange at all if we recall Agile Manifesto principle number 11, "The best architectures, requirements, and designs emerge from self-organizing teams.", in combination with the Agile Manifesto value "Working software over comprehensive documentation". At the same time, it is not strange either that refactoring is one of the most common practices in agile software development, since the absence of an architecture artifact means low focus on the total architecture from the start and during the agile development. This in turn means that the software code needs to be continually restructured, not only within the respective components, but also across the total system architecture, in order to retain the system quality attributes (sometimes also called system qualities, or only qualities) of the software, i.e., the non-functional requirements. The need for refactoring is especially valid for agile development due to the common reduction of the architecture artifact into only a sketch. This is very different from traditional (waterfall) software development, where the systems design aims at generating a proper architecture artifact. With a few agile teams making a smaller system, the system validation (the right product) and system verification (the product right) loops are fed back every iteration or increment. This makes it possible to understand not only when refactoring is needed, but also that an emergent architecture is possible for small systems with low complexity, since refactoring can successfully restructure not only the components, but also the total system architecture.
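As a minimal, hypothetical illustration of behavior-preserving refactoring (the functions and names below are invented for this post, not taken from the study), consider duplicated logic being extracted so that the structure improves while the functionality stays exactly the same:

```python
# Before: the same validation rule is duplicated in two places.
def register_user(email: str) -> str:
    if "@" not in email or email.strip() != email:
        raise ValueError(f"invalid email: {email}")
    return f"registered {email}"

def invite_user(email: str) -> str:
    if "@" not in email or email.strip() != email:
        raise ValueError(f"invalid email: {email}")
    return f"invited {email}"

# After: the duplicated rule is extracted into one place. Behavior is
# unchanged (the same inputs succeed or raise exactly as before), but the
# structure is easier to keep consistent as the system grows.
def validate_email(email: str) -> str:
    if "@" not in email or email.strip() != email:
        raise ValueError(f"invalid email: {email}")
    return email

def register_user_refactored(email: str) -> str:
    return f"registered {validate_email(email)}"

def invite_user_refactored(email: str) -> str:
    return f"invited {validate_email(email)}"

# A small check that the refactoring preserved the observable behavior:
assert register_user("a@b.se") == register_user_refactored("a@b.se")
```

The assert at the end is the whole point: the observable behavior before and after the refactoring is identical; only the structure changed.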

But when developing large software systems, no matter how the teams work together, the system verification feedback does not accompany the system validation feedback loop for every increment or part of the development. This means that we need, and can have, a good architectural skeleton (our system) from the start, or at least some idea of the architecture, which we should verify as soon as possible and then continually during the development. We can think of it as TDD for the big system, meaning we need to think Test-Driven Systems Design (TDSD) as well. In this way we achieve fast feedback also on our total system with its architecture, which is close to the thinking behind Set-Based Design and multiple concepts at Toyota (Lean Product Development), or prototypes in hardware product development. In most cases when we are building big systems with a massive back end, for example for governments, banks, insurance companies and other service companies, time to market is not the most important thing; instead, non-functional requirements like reliability, performance efficiency, security and suitability are, which means that we have both the time and the necessity to make a proper systems design from the start.
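To make the TDSD idea a bit more concrete, here is a minimal sketch, assuming a hypothetical walking skeleton of the big system and an invented 50 ms latency budget as a stand-in non-functional requirement; none of these names come from an established framework:

```python
import time

# Hypothetical walking skeleton: a thin end-to-end slice of the big system,
# wired together early so that system-level properties can be tested at all.
def end_to_end_request(payload: str) -> str:
    parsed = payload.strip()              # stand-in for a front-end step
    stored = f"stored({parsed})"          # stand-in for a back-end step
    return f"ack:{stored}"                # stand-in for the response path

# A "test first" for the total system: the non-functional requirement
# (here an illustrative 50 ms latency budget) is written as a test before
# the real components exist, and re-run every increment.
def test_latency_budget() -> None:
    start = time.perf_counter()
    response = end_to_end_request("order #42")
    elapsed_ms = (time.perf_counter() - start) * 1000
    assert response.startswith("ack:"), "system must stay coherent end to end"
    assert elapsed_ms < 50, f"latency budget exceeded: {elapsed_ms:.1f} ms"

test_latency_budget()
```

The design choice mirrors TDD: the system-level test exists before the system does, so every increment gets fast feedback on the total architecture, not only on its parts.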

And since we do not have an architecture artifact in agile development, we will not have the architecture process to achieve it either. Over time, the risk is then high that the craftsmanship of the architecture process necessary for large systems in software development, the built-up skills and experience, will be lost. This is because we only build on methods and frameworks using artifacts and processes that were originally developed for smaller systems. Without this transdisciplinary craftsmanship, we will only get a sketch artifact instead of an architecture artifact. Even though they may look the same, only the latter is (systems) designed. If we additionally assume that all work will be done in the agile teams also for large software system products, the risk that this transdisciplinary craftsmanship will be lost is even higher.

There are two other closely related things to watch out for. The first is Kanban boards as artifacts, since they look like they describe the actual process steps with their columns, as well as the inputs and the outputs. We need to see Kanban boards only as examples, not as a prescription. This is especially valid at scale, if we are using a Kanban board similar to the ones used by our agile teams, since the reduction of the complexity of the whole through a systems design is then missing. The integration back into a whole, and the systems testing, will not be visible either. Leaning on explicit knowledge and artifacts when scaling or changing to a more complex context, instead of using our tacit knowledge and critical thinking, is always a bad idea, since the risk of failure is high.
The other thing to watch out for is Continuous Integration (CI) as an artifact when scaling, since the word Integration here does not mean integration as in a traditional Integration Event (IE). Rather, CI is only an aggregation of parts where merely coherence on the system level is fulfilled, especially when combined with the Kanban board issue above where the systems design is missing. An IE, in contrast, is a planned event where an integration between systems-designed parts occurs, and where a system test also occurs, to validate and verify that the parts together work not only as a coherent, but also as a cohesive, system.
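A small, hypothetical sketch of the difference (the services and checks are invented for illustration): a CI-style check confirms that each part works on its own, i.e., mere coherence, while an IE-style system test exercises the integrated whole, i.e., cohesion:

```python
# Two hypothetical, independently developed parts of a larger system.
def pricing_service(order: dict) -> dict:
    order["price"] = order["quantity"] * 10.0
    return order

def invoicing_service(order: dict) -> str:
    return f"invoice: {order['customer']} owes {order['price']:.2f}"

# CI-style check: each part passes its own tests, so the parts "aggregate"
# (coherence), but nothing here verifies the behavior of the assembled whole.
assert pricing_service({"customer": "a", "quantity": 2})["price"] == 20.0
assert "owes" in invoicing_service({"customer": "a", "price": 20.0})

# IE-style system test: the parts are integrated and the whole chain is
# validated and verified end to end (cohesion), the way a planned
# Integration Event with a system test would do it.
def integration_event_test() -> None:
    order = {"customer": "acme", "quantity": 3}
    invoice = invoicing_service(pricing_service(order))
    assert invoice == "invoice: acme owes 30.00"

integration_event_test()
```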

To summarise this blog post, we can state that artifacts will never do the work for us; they are meant to reduce complexity for us humans and keep track of things for us, but they are still only explicit knowledge. It is also important to point out that just because we can interpret an artifact, and even understand how it works, does not mean that we understand why it works, the theory behind it, which in turn means that we cannot make one ourselves. Without that tacit knowledge, the risk is high that we will even miss necessary artifacts, or that, for example, the architecture artifact becomes only a shallow sketch, like a finished picture puzzle. Instead, as always, we need skilled and experienced people to do the job, on all the different "levels". We must also pay extra attention to the transdisciplinary top-down work, especially for bigger products and systems, which requires tacit transdisciplinary knowledge for systems design and specialist knowledge of the non-functional requirements, like IT security, fault handling, monitoring, safety, cybersecurity, communication, traceability, law, etc. The solution to these non-functional requirements needs to be taken care of top-down, and is something completely different from the false belief that coordinating parts, whose solutions can in the end be aggregated into the whole, is enough.

We simply need this transdisciplinary top-down work to be sure that our teams, agile or not, start working on the right parts with what they know best, their tacit knowledge, and then integrate those parts as soon as possible into a unified and well-functioning whole. Sometimes, though, it is easy to focus too narrowly on implementing commonly used best-practice artifacts, instead of figuring out what our own way of working should look like, so that we can solve our own problems, in our context and our domain.

A warning is apt here, especially for service organisations like banks, insurance companies and governments, which used to buy in software products in order to offer services to their customers, but nowadays develop the software themselves. These organisations often not only lack the tacit transdisciplinary knowledge for systems design; their specialists in the non-functional requirements described above are also not used to working in "transdisciplinary" teams to solve the whole together, but rather to informing and controlling the suppliers within their respective expert areas.

This makes it cumbersome, especially for these organisations, when the scaling agile frameworks try to divide the whole into pieces without first doing a proper systems design of the non-functional requirements for the whole, and without first looking at the totality of the customers' journeys and the customers' current problems. The frameworks also try to divide not only the explicit knowledge, but also the tacit knowledge of the specialists at higher levels, into explicit knowledge that the teams can absorb, so that the teams can solve the non-functional requirements themselves, even though they have neither the explicit nor the tacit knowledge, i.e., no experience or understanding of the transdisciplinary complexity/complicatedness that needs to be reduced for the whole. The scaling agile frameworks divide everything into parts in order to fit the engineering metaphor, which looks easy. But dividing the whole into pieces is only appropriate for production, never for product or system development, due to the transdisciplinary complexity/complicatedness that needs to be reduced. Exposing this to these organisations is not easy, since it looks just as easy as the engineering metaphor always has. Especially since their managers seldom have any knowledge or understanding of reducing complexity, since they did not need it before, but they are still in charge.

That was all for this time. C u soon again.

 

*We can divide the by-products into two categories: intermediate by-products and input/output by-products. The intermediate artifacts, like backlogs and prototypes, can be discarded when the actual product is released. The input/output artifacts, like the system architecture and the requirements, are necessary both for starting up the development and for additional requirements and bug fixes after the release. This is in order to have good control and structure over the system continually, up until the release as well as after it, a necessity for us humans.

**We need to consider our first architecture as a prototype, a first skeleton of the system, meaning that we need feedback as soon as possible, so we know whether our system architecture is on the right track or not. The reason is that for big systems, the complexity is too high to make a perfect specification of a (systems-designed) architecture.

***Their conclusion also challenges the thinking about always having static teams. But maybe static agile software teams actually doing the work is not the problem; instead, the problem is that the transdisciplinary team needed for reducing the complexity top-down cannot be static. This is because the initial problem to solve top-down usually differs more from last time than the actual coding of the functionality does for the agile team.

****Refactoring as a term in software development dates from 1990 (a decade before the Agile Manifesto), even though code refactoring had been done for decades before that [7]. Refactoring is intended to improve the design, structure, and/or implementation of the software's non-functional requirements, while preserving its functionality. Decomposition is another word for factoring [8].

References:

[1] Encyclopedia Britannica. "architecture". Link copied 2021-05-31.

[2] Wikipedia. Tacit knowledge. Link copied 2021-05-31.
Tacit knowledge – Wikipedia

[3] Wikipedia. Explicit knowledge. Link copied 2021-05-31.
Explicit knowledge – Wikipedia

[4] Takeuchi, Hirotaka and Nonaka, Ikujiro. "The New New Product Development Game". Harvard Business Review, Jan 1986 issue. Link copied 2018-09-05.
https://hbr.org/1986/01/the-new-new-product-development-game

[5] Snowden, Dave. Blog post. Link copied 2021-05-31.
ASHEN revisited – Cognitive Edge (cognitive-edge.com)

[6] Gröber, Matthias. Master's Thesis, Jan 2013. Link copied 2021-05-31.
mg-thesis.pdf (tum.de)

[7] Wikipedia. Code refactoring. Link copied 2021-05-31.
Code refactoring – Wikipedia

[8] Wikipedia. Decomposition. Link copied 2021-05-31.
Decomposition (computer science) – Wikipedia
