Dissolution of problems in organisations (Human Complex Adaptive Systems) – part 1/4 – Introduction

Today’s blog post is the first and introductory one in this series, and goes through what Dissolution of problems in human Complex Adaptive Systems (CASs) means, with organisations as the main focus. Dissolution of problems (a term coined by Dr. Russell Ackoff [1]) and its relation to inductive and abductive approaches will be an important part of the discussion. In the next post, the method of Dissolution of problems in organisations is presented, and in the third blog post we will go through the theory behind it. The last blog post in the series will go deeper into why we always need to start by trying to dissolve our problems, before we even think about using only abductive or inductive approaches for organisational problem-solving. The latter approach is especially treacherous, since its main focus today is only on the effects we will get from the new bright and shiny thing, and not on explaining why a new framework or method will actually do the job. With that focus, we cannot get rid of the current problems within the organisation to be transformed, since the problems of the organisation in focus are never even looked for.

This blog post will also start with a short overview of what every new way of working, method or framework needs to follow in order to make an organisation flourish, and to avoid the symptoms and consequences (that Dissolution of problems will find).

When we set up our organisation from the beginning, we are always subject to the laws and principles of science: natural science for the products and the activities we perform to achieve them, complexity science for complex activities, and the anthropological principles regarding the cognitive abilities of us humans. If we do not follow the science properly, we run a high and unnecessary risk of failure. If we do not follow the science, we will only introduce a tremendous number of problems for ourselves, which we cannot simply write off as uncertainty, because they have already happened; the wrong decision, to not follow science, was already taken in the past. If we do not follow the science, we will in the future cause ourselves “a mess – a system of problems”, as Ackoff put it, so our situation becomes even more blurred and confused. If we do not disentangle our mess of symptoms and consequences by finding and solving the root causes, we can never judge the amount of uncertainty (WHAT product to make) and/or complexity (HOW to make the product) we really have, and we can definitely not solve our mess directly. We really must start asking ourselves why our organisation does not work properly, especially given the endless list of methods and frameworks to choose from. Dave Snowden recently brought this why-question up [2], where he discusses Peter Senge’s book The Fifth Discipline from 1990, which led to the Learning Organization approach, a term Senge coined himself. Snowden states that “We are now three decades on, and we should maybe start to question a little why it, and its successors have not worked.”. But already in 1995 [3], Ackoff listed methods and frameworks from that time and the decades before that did not work because they were antisystemic. So, we also need to talk about the methods and frameworks preceding the Learning Organization as well.
The Learning Organization is also included in the list of about 20 methods [3], originating from national surveys [3], which Ackoff identified as antisystemic back then.

 

This is where Ackoff’s Dissolution of problems comes into the picture: how to think in order to avoid being antisystemic and get rid of organisational problems once and for all. Simply put, we can only change the system itself or its environment in order to get rid of many of the problems within it, and changing the environment is rarely an option for an organisation.

 

When we already have our organisational problems, we still have many hard facts to proceed from. All the problems we can observe in the organisation are, taken together, hard facts, and are also the best leading indicator of whether the way of working is malfunctioning. Observed problems can never be outrivalled by any measuring: not only do they serve as leading indicators that are hard to game, but, perhaps more obviously, they are a clear starting point for finding and solving the root causes. The Cobra Effect also shows us that we can, in hindsight, understand that effect-cause chains exist in human systems too, even though they are sometimes hard to find if we cannot observe the important problems, which is possible in an organisation. Another hard fact is that, according to complexity theory, we cannot isolate (frame) a problem within a system and try to solve it, since that will only lead to unintended consequences. A last important hard fact to mention is that symptoms and consequences cannot be solved, only root causes. That only root causes can be solved is straightforward, since the root causes originate from non-fulfilled science in the first place. Dissolution of problems has all these hard facts as its foundation.

 

Let us now elaborate on uncertainty and complexity, starting with uncertainty. Uncertainty is about what action (for example a decision) to take in the current now, to hopefully achieve a future effect, even though the future is always unpredictable. But to take an action on the current situation, without any deeper analysis, is to take an action on a self-caused entanglement of problems. Many of these problems are symptoms and consequences, which it is never reasonable to attack, since symptoms and consequences cannot be directly solved. If we still try, we will have a Walk in the Dark, since solved symptoms and consequences will never aggregate to a solved root cause. The first step should therefore always be to collect the problems within the organisation and try to find the root causes. With the found root causes in hand, we have brought earlier mistakes into the daylight and have the necessary information about the past, so we can solve the root causes and thereby take the right actions in the current now. But since the environment the organisation operates in is constantly changing, there can be high uncertainty when a decision is taken, for example about the content and price of a tender that will be accepted, or what product to make that the customer will buy. This means that in such situations with high uncertainty about the future, even when our way of working is perfect, we can only trace our way back to the decision taken and hopefully learn something for the future, even though the exact starting conditions for the decision will never be repeated; we cannot change the outcome.
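The first step described above, collecting all observed problems and tracing each one back along its effect-cause chain until only root causes remain, can be sketched as a toy model. Everything here (the problem names and the cause links) is purely illustrative and not part of any real method:

```python
from collections import defaultdict

# Toy model of Dissolution of problems: observed problems form
# effect-cause chains, and only the roots (causes with no cause of
# their own) can actually be solved. All data is illustrative.

# Each observed problem points to its direct cause (None = root cause).
cause_of = {
    "missed deadlines": "overloaded teams",
    "overloaded teams": "no limit on work in process",
    "quality escapes": "skipped reviews",
    "skipped reviews": "overloaded teams",
    "no limit on work in process": None,  # a root cause
}

def root_cause(problem: str) -> str:
    """Walk the effect-cause chain backwards until a root is reached."""
    while cause_of[problem] is not None:
        problem = cause_of[problem]
    return problem

# Collect all observed problems and group them by their root cause.
by_root = defaultdict(list)
for p in cause_of:
    by_root[root_cause(p)].append(p)

print(dict(by_root))
# In this toy data, every observed problem traces back to the single
# root cause "no limit on work in process": solving it dissolves the rest.
```

The point of the sketch is only the direction of work: the symptoms are never attacked directly; they are grouped under their roots, and action is taken on the roots alone.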

When Snowden talks about ontology (how things are), epistemology (how we know things) and phenomenology (how we perceive things), he states [4]

“… and increasing the alignment between the three is key to coherent action.”,

which means that by finding the root causes of the problems we have caused, we make it possible to understand the context (ontology), gain a really solid base for our next action (epistemology), and dissolve the majority of the entangled problems we perceive (phenomenology). The need to always start with Dissolution of problems and the collection of all symptoms and consequences perceived in our organisation will be explored further in the last blog post in the series, as mentioned in the prologue to this series.

Dissolution of problems is only one step, the first one, and most often the only one needed, in a “Systematic approach for a systemic learning”, used for solving organisational problems and thereby achieving systemic learning in the whole organisation.

There are different ways to describe a Complex Adaptive System, a CAS, and the most common is to describe it as containing agents with interrelationships. About CASs, our complexity guru Dave Snowden* has stated: “The only thing we with certainty can say about a CAS is that any intervention with the system will have unintended consequences” [5] and “More importantly any complex system is an entangled weave of boundaries, agents, probes and the like. If you cut it you destroy it.” [6], where the Hawthorne effect [7] and the Cobra Effect [7] are good examples of interventions that lead to unintended consequences (effects). This means that we cannot intervene with any agent in a live system, nor model it, nor measure its parts, since then we will sub-optimise the system; or as Ackoff always put it, “all interventions with the agents are anti-systemic**”. Egoistic is an apt word, since any such intervention will generate a sub-optimisation in the system.

The implication of this is that any intervention in a complex adaptive system will cause unintended consequences, with an unknowable what, when and where. This means that inductive techniques (bottom-up logic), no matter if they are based on many cases or not, will be very problematic, since they will set up an ideal state and try to close the gap. This indirectly means that they will never look for the root causes, and if an inductive approach solves a root cause anyway, it is only by chance. Inductive approaches will therefore always start chains of unintended consequences, since they try, without a clue about the real root causes, to solve the problems, the symptoms and consequences, directly (which is impossible), and they are therefore suitable for neither complexity nor uncertainty. Abductive techniques (some of them), on the other hand, are more suitable for organisational problems, mostly those regarding uncertainty about the future, but not the ones where we can learn from the past. They try to avoid premature convergence on a solution by solving smaller problems (a kind of nudging), analysing the result with fast feedback, and then taking the next small step. An example of an abductive approach is mapping the dispositional state in a situational assessment, identifying adjacent possible states in the fitness landscape, and then making small hypotheses and experiments and drawing conclusions from the new observations. One tool for this is SenseMaker®, with big similarities to making hypotheses and doing safe-to-fail experiments in parallel; see Dave Snowden’s blog posts for more information [8], [9]. Snowden has also, since some years back, developed an approach called constraint mapping [10], “as a key approach to understanding, navigating, and managing complexity”, as he states.
He continues that this is important in order to avoid premature convergence when acting directly on the situational assessment, and that “constraints are things that we can manage in a complex system, and they are also things that we can know.”.

But any intervention with (the agents within) a human system gives unintended consequences, no matter if we are using inductive or abductive approaches. This means that the list of things we have found so far, which we need to consider cautiously when trying to solve the problems within a human system, is very long, and the number of items is still growing. Here are some examples to consider if we are trying to intervene with a human system: dealing only with parts of the system, modelling the system, measuring effects on parts of the system, “everything new will result in better effects” talking (counterfactuals), ignoring observed hard facts, ignoring science, non-pragmatism (idealism) (manipulation), context-free (universal) solutions (for example copying manufacturing solutions to product development without consideration), that confusion and incomprehension must be part of the process until enlightenment can be reached (manipulation), hypotheses without consideration as the way forward (manipulation), that a transformation will take 5-10 years as the new normal (manipulation), premature convergence on a solution, directly changing people, only one (or too few independent parallel) observers, the need of weak signal detection, (quick) decisions from data privileging past experience, the aggregative error, compliance problems with the Pareto distribution (when referring only to the normal distribution), just asking questions in facilitated conversation, open space techniques, confusing coincidence with causation (Post hoc ergo propter hoc), confusing correlation (consistent coincidence) with causation, or correlation is not causation (Cum hoc ergo propter hoc), retrospective coherence, replicating the emergent success of someone else, cognitive bias, cognitive heuristics, post hoc rationalisation, controlling or manipulating the outcome of the process, top-down approach to change, bottom-up approach to change, norming and pattern entrainment, bias at situational assessment, expert bias, mediation/interpretation/screening of data before reaching the decision-maker, courtier syndrome, strange attractors, Delphi techniques, the Hawthorne effect, dark constraints***, predetermination of only positive stories, talk about how things should be (effects), an idealistic state as a goal and closing the gap, analysis paralysis, theory without validation (case-based or inductive approaches), interpretative conflict, unobjectivity****, organisational non-transparency, framing, “the eyes of the investigator”, root cause analysis only as a lag indicator or on isolated paths, general challenges for root cause analyses such as finding only one root cause out of many [11], missing a catalyst, only lagging and no leading indicators, or late leading indicators such as attitudinal measures, partial abstractions, gaming, engaged facilitation, an idealistic facilitator, opinions about the future or the past influencing how the present is seen, relationships of people involved in a workshop or assessment, inattentional blindness, patterns of group interaction, sub-optimisation and the impossible symptom-solving that will only generate more symptoms and consequences, etc., where some of them are what Dave Snowden calls “the tyranny of the explicit” [12]. Many of them are also an outcome, new symptoms and consequences, originating from the impossibility of trying to solve symptoms, which is the same as sub-optimisation. Sub-optimisation always means that we are not heading towards the roots of our problems, but in the diametrically opposite direction, meaning more and more unintended consequences, since we are trying to solve symptoms and consequences. This makes it possible for the list above to be infinitely long, which is why it is still growing.
This means that we need to be careful with abductive techniques too, not only the inductive ones, when our problems are symptoms and consequences, since both approaches mean interventions of some kind, directly on symptoms and consequences, which are impossible to solve.

Kind of tricky, right? So, what can we do about it?

Instead, we need to take another approach and rethink in which cases we can take advantage of hindsight. We frankly need to think differently, or as Einstein stated:

Without changing our pattern of thought, we will not be able to solve the problems we created with our current patterns of thought.

Taking advantage of hindsight for organisational problem-solving can be done at significantly higher levels than can be seen in today’s methods and frameworks, and also compared to how uncertainty, complexity and ambiguity are generally presented. By taking this advantage, we can, in the light of our current knowledge (the problems seen), change the actions of today by solving the mistakes (not fulfilling the principles) made in the past, which then dissolves the problems of today. That is kind of neat, also since we completely avoid unintended consequences by never nudging a live system*****.

So, it is not about changing the future from the current situation, like intervening with the current fitness landscape, since this will generate many of the items to consider in the list above, due to the unintended consequences generated by the interventions.

Here the Cobra Effect is an apt example to keep in mind, not only regarding unintended consequences from interventions (the bounty), but most of all because the Cobra Effect shows that there is effect and cause in human systems. At first, it was impossible for the Englishmen to understand the sudden increase of cobras. But when the cobra farms were found, the missing piece (a symptom of the sub-optimal strategy of bounties for cobras) was found. This meant that the effect-and-cause chain could, in hindsight, be completely drawn backwards in time from the last effect to the first cause, which is the essence of a root cause analysis.
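The hindsight reconstruction described above, drawing the chain backwards from the last effect to the first cause, can be sketched as a small example. The chain data is of course a simplified illustration of the cobra story, not a faithful historical record:

```python
# The Cobra Effect as a simple effect-cause chain. In hindsight, once
# the missing piece (the cobra farms) is known, the chain can be drawn
# completely backwards from the last effect to the first cause.

# Each pair is (effect, its direct cause); None marks the first cause.
chain = [
    ("bounty offered per dead cobra", None),  # first cause
    ("cobra farms set up to breed cobras", "bounty offered per dead cobra"),
    ("bounty cancelled, farmed cobras released", "cobra farms set up to breed cobras"),
    ("cobra population increases", "bounty cancelled, farmed cobras released"),
]
caused_by = dict(chain)

def trace_back(effect: str) -> list[str]:
    """Return the effect-cause chain from the last effect to the first cause."""
    path = [effect]
    while caused_by[effect] is not None:
        effect = caused_by[effect]
        path.append(effect)
    return path

for step in trace_back("cobra population increases"):
    print(step)
# Walks backwards: population increase -> release -> farms -> bounty (the root).
```

Until the "cobra farms" link was known, `caused_by` had a gap and the walk could not reach the root; that is exactly why collecting all the symptoms matters before the chain can be concluded.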

With the Cobra Effect in mind, we can also see the importance of having all the symptoms needed in order to trace the effect-and-cause chain down to the roots of the clearly visible problems (the increasing pay-out of bounties and the increasing population of cobras). And with all the things to consider above, we really need a systematic approach to achieve true systemic learning in our organisation, so we can avoid having to consider the items in the list. We need a very easy, straightforward and systematic way of looking at problem-solving in our organisations in order to be able to achieve systemic learning. This is very important, so that we do not apply continuous improvement methods on the system level, or on parts of the system in focus, before we have solved the problems within our system. The first step (and many times the only one needed) of “A systematic approach to systemic learning” is a true eye-opener for disentangling the problems, the symptoms and consequences, in an organisation.

Of course, there is no perfect way of working, which means that we will always have some kind of symptoms, and maybe also consequences. Our way of working always needs to take into account the number of people, domain knowledge (I-competence), integrative knowledge (T-competence), other current knowledge, current tools, etc., which will change over time, giving us new symptoms and consequences as well. But even with our problems solved, we can of course always get better by continually tweaking our system à la Kaizen, even though the effects will become smaller and smaller.

 

But to just run with a hypothesis, which means trying to solve a symptom, is actually always a very bad strategy, especially if we can find the root causes and solve them instead. That is why we should always start with Dissolution of problems, i.e., making a root cause analysis.

 

To achieve this systematic approach for a systemic learning, we may need to combine all three approaches, deduction, abduction and induction, where the first, Dissolution of problems, is always the starting point in order to disentangle our mess of problems. The necessary combination of different techniques has also been noted by Snowden, who states the following when alluding to root cause analyses: “It is also important to realise that even in a deeply entangled system there are some cause-and-effect pathways – we need to be flexible here. Learning can be achieved in many ways and excluding more traditional and structured approaches is almost as bad as claiming those approaches are all that is needed.” [13].

In the next blog post, we will go through a short version of Dissolution of problems.

See you then.

 

*At Cognitive Edge, the father of the Cynefin™ framework

**Do not mix up anti-systemic with non-systemic. Non-systemic means that there are no side effects. A good example of where there are no non-systemic cures, only anti-systemic ones, is medicines taken orally to cure a symptom in the body. This is because the body is a complex system too, which means that no symptom in the body can be cured with any medicine without affecting other parts of the body with side effects. Instead, the root cause(s) of the symptoms need to be found for a real cure. The same goes for our organisations: only the root cause(s) of the symptoms can be solved, since trying to solve the symptoms means more symptoms, which can be hard to foresee and can never be directly solved either.

***dark constraints: a term coined by Snowden for something that affects the current behaviour of the complex (adaptive) system, but where we can only see the effect, not its cause or modulating factors [14]. If we look at the current symptoms and consequences originating from root causes, where root causes are non-fulfilled organisational principles, it is not a wild guess that dark constraints correspond to these non-fulfilled principles. This means that dark constraints are not as mystical as their name may suggest.

****unobjective [15]: not possessing or representing objective reality. This means that when someone states something unobjective, for example 1+1=42, it is not a subjective statement, and therefore not an opinion; it is plain wrong, since 1+1=2. Organisations that accept unobjectivity when transforming to a new way of working will not only create a malfunctioning way of working; even before that, a deep polarisation within the organisation will be created.

*****Snowden also states the importance of avoiding direct actions on the current situation: “mapping and changing constraints avoid a direct connection between situational assessment and action.” [10], where constraints correspond to principles as a way to change the interactions of the system.

 

References:

[1] Snowden, Dave. Blog post. Link copied 2021-08-04.
Twelvetide 20:11 Coherent to what? – Cognitive Edge (cognitive-edge.com)

[2] Snowden, Dave. Blog post. Link copied 2021-09-12.
Learning: an anthro-complexity perspective – Cognitive Edge (cognitive-edge.com)

[3] Ackoff, Dr. Russell Lincoln. Speech. “Systems-Based Improvement, Pt 1.”, Lecture given at the College of Business Administration at the University of Cincinnati on May 2, 1995.
The list at 03:30 min, the national surveys at 03:40, and the explanation of antisystemic at 04:48 min. Link copied 2018-10-27.
https://www.youtube.com/watch?v=_pcuzRq-rDU

[4] Snowden, Dave. Blog post. Link copied 2021-08-09.
Separated by a common language? – Cognitive Edge (cognitive-edge.com)

[5] Snowden, Dave. Blog post. Link copied 2021-08-04.
Cutting through a weave destroys it – Cognitive Edge (cognitive-edge.com)

[6] Ackoff, Russell Lincoln. Article. Link copied 2018-12-15.
https://thesystemsthinker.com/a-lifetime-of-systems-thinking/

[7] Snowden, Dave. Blog post. Link copied 2019-06-04.
Of effects & things – Cognitive Edge (cognitive-edge.com)

[8] Snowden, Dave. Blog post. Link copied 2020-12-12.
The dispositional state – Cognitive Edge (cognitive-edge.com)

[9] Snowden, Dave. Blog post. Link copied 2020-12-12.
Power laws & abductive research – Cognitive Edge (cognitive-edge.com)

[10] Snowden, Dave. Blog post. Link copied 2021-08-02.
Returning to constraints – Cognitive Edge (cognitive-edge.com)

[11] Wikipedia. Root cause analysis. Link copied 2021-09-05.
Root cause analysis – Wikipedia

[12] Snowden, Dave. Blog post. Link copied 2021-07-20.
Yes but… (and the Isaiah moment is still with us) – Cognitive Edge (cognitive-edge.com)

[13] Snowden, Dave. Blog post. Link copied 2021-08-11.
Root ’cause’ & complexity – Cognitive Edge (cognitive-edge.com)

[14] Snowden, Dave. Blog post. Link copied 2021-07-06.
The tyranny of the explicit – Cognitive Edge (cognitive-edge.com)

[15] Merriam-Webster. Unobjective. Link copied 2021-10-03.
Unobjective | Definition of Unobjective by Merriam-Webster
