Software Development Cornerstones: Complexity Management

Sergiy Yevtushenko
Jul 23, 2024


Less Art, More Engineering

It would hardly be an exaggeration to say that the whole history of software development is a fight with complexity. Moreover, software development as we know it was born from the necessity to address the complexity of programming the first computers. The rapid progress of hardware rendered writing programs in machine code unsustainable. This triggered the adoption of the first tools that converted a more human-friendly representation into machine code. The rest is history.

Today, when discussing virtually any topic related to software development, we often use the word “complexity” and assume that “everyone knows what it means”. Such an assumption is problematic: different understandings of such a basic concept lead different people to significantly different conclusions under identical conditions. In other words, this is one of the sources of subjectivism in software development.

Complexity is a complex topic in itself (no pun intended). Fortunately, for our purposes, we can use a simplified model.

System complexity consists of the complexity of individual components and the complexity (the number) of the links between them.

The main simplification is that all links are considered identical. In reality, there are different types of links: strong (of varying strength), soft, direct, indirect, fixed, movable, replaceable, etc.
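To make the simplified model concrete, here is a minimal sketch in Java. The names Component, Link, and SystemComplexity are hypothetical; the code simply restates the assumption above: system complexity is the sum of component complexities plus the number of (identical) links.

```java
import java.util.List;

// A minimal sketch of the simplified model: all links are treated as identical,
// so system complexity is just component complexity plus the link count.
// Component, Link and SystemComplexity are hypothetical names used for illustration.
record Component(String name, int complexity) {}
record Link(Component from, Component to) {}

final class SystemComplexity {
    static int of(List<Component> components, List<Link> links) {
        int componentComplexity = components.stream()
                                            .mapToInt(Component::complexity)
                                            .sum();
        // e.g. two components of complexity 3 and 4 with one link between them -> 8
        return componentComplexity + links.size();
    }
}
```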

So, how can we manage complexity, or, ideally, reduce it? Since this is not the first time humanity has faced this enemy, we have already discovered two well-working tools for handling complexity:

  • Horizontal separation or Divide and Conquer
  • Vertical separation or Abstraction Hierarchy

The main idea behind both is the same: reduce the number of elements and links to consider. But the approach is different in each case.

Horizontal separation creates groups of elements and attempts to reduce the number of links by rerouting and/or grouping them.

Horizontal separation

Typical examples:

  • Enforcing a single entry point in structured programming
  • Encapsulation in OOP (see the sketch after this list)
  • Microservices (although this often fails because the properties of the links can’t be ignored in this case)
  • Decoupling of classes/functions/modules/etc.
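As a small, hedged illustration of the encapsulation item above (all class and method names are hypothetical), the sketch below shows how hiding internals reroutes links: external code links only to computeTotal(), not to the fields and helper methods behind it.

```java
import java.util.List;

// Hypothetical example: only computeTotal() is visible from the outside,
// so external code has a single link to this class instead of separate links
// to every field and helper method it uses internally.
final class OrderTotals {
    private final List<Integer> itemPrices;   // internal state, never linked from outside
    private final int taxPercent;             // internal state, never linked from outside

    OrderTotals(List<Integer> itemPrices, int taxPercent) {
        this.itemPrices = itemPrices;
        this.taxPercent = taxPercent;
    }

    // The single externally visible link.
    int computeTotal() {
        return applyTax(sumItems());
    }

    private int sumItems() {
        return itemPrices.stream().mapToInt(Integer::intValue).sum();
    }

    private int applyTax(int subtotal) {
        return subtotal + subtotal * taxPercent / 100;
    }
}
```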

Vertical separation also creates groups, but the main tool is the regrouping of elements and reordering/rerouting links to build a hierarchy:

Vertical separation

Typical examples:

  • Programming languages
  • Operating systems
  • Database management systems
  • Single Level of Abstraction Principle

One of the distinctive properties of this technique is the frequent presence of a facade: some kind of interface which represents the whole system to external entities. The facade can take different forms, for example an API or a DSL (such as SQL). Even if it is not explicitly defined as a language, it is often useful to consider the facade a vocabulary of words which users can use to “speak” to the system behind it.
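Here is a minimal sketch of such a facade in Java. DocumentStore and its methods are hypothetical names, but the idea is the one described above: the interface is the only vocabulary external code uses to speak to the subsystem, whichever implementation sits behind it.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical facade: the only "words" external code can use to talk
// to the storage subsystem. Everything behind it stays hidden.
interface DocumentStore {
    void put(String key, String document);
    String get(String key);
    void remove(String key);
}

// One possible internal implementation; callers never see this type,
// only the DocumentStore facade.
final class InMemoryDocumentStore implements DocumentStore {
    private final Map<String, String> storage = new ConcurrentHashMap<>();

    @Override public void put(String key, String document) { storage.put(key, document); }
    @Override public String get(String key)                { return storage.get(key); }
    @Override public void remove(String key)               { storage.remove(key); }
}
```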

With properly built vertical separation, external entities are entirely isolated from the internals of the system. In some cases, internals leak through the interface, rendering part of the abstraction hierarchy, or the whole of it, useless. The term “leaky abstraction” describes exactly this situation.
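To contrast with the facade above, here is a hedged sketch of a leaky variant (again with hypothetical names): returning a java.sql.ResultSet exposes the fact that a relational database sits behind the interface, so callers are no longer isolated from the internals.

```java
import java.sql.ResultSet;

// Hypothetical leaky variant of the facade: the return type reveals that
// a relational database sits behind the interface, so every caller now
// depends on that internal detail.
interface LeakyDocumentStore {
    void put(String key, String document);
    ResultSet get(String key);   // leaks the SQL implementation detail
    void remove(String key);
}
```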

The most efficient abstraction hierarchy can be obtained by making each abstraction level consist of a set of independent elements which do not interact with each other, i.e. the number of horizontal links is zero:

Ideal vertical separation

Actually, a more accurate picture would be not a graph but nested boxes like the ones shown below:

Nested boxes

The inner components are the internals of the outer components. Interestingly enough, this picture is nothing other than a visual representation of the “low coupling, high cohesion” slogan. In each group, the elements do not interact with each other and are therefore fully decoupled, but they are used together, i.e. they have high cohesion.
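A minimal Java sketch of the nested-boxes picture, with hypothetical class names: Loader and Formatter never reference each other (low coupling), yet they are always used together inside ReportPipeline to serve a single purpose (high cohesion).

```java
// Hypothetical sketch of the nested-boxes picture: the outer component
// composes inner components that never reference each other (low coupling),
// yet are always used together for one purpose (high cohesion).
final class ReportPipeline {
    // Inner components: no links between them, only to the outer component.
    private static final class Loader {
        String load(String source) { return "raw data from " + source; }
    }

    private static final class Formatter {
        String format(String raw) { return "[report] " + raw; }
    }

    private final Loader loader = new Loader();
    private final Formatter formatter = new Formatter();

    // The outer component is the only place where the inner ones meet.
    String buildReport(String source) {
        return formatter.format(loader.load(source));
    }
}
```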

It is important to note that such a transformation of the source system does not necessarily reduce overall complexity. Grouping elements requires adding new elements (the groups themselves) and creating a name for each group. Changes in links may require adding new links or even new elements.

The goal of complexity management is not to reduce overall complexity but to reduce local perceived complexity.

Local perceived complexity is the complexity we deal with when working with a particular part of the system.

The smaller the set of elements and links we need to consider, the lower the local perceived complexity. Keeping it at a reasonable level is the only way to make a complex system comprehensible. The question is what can be considered a “reasonable level”. I believe it is dictated by the average capacity of human short-term memory, i.e. 7 (plus or minus 2) items. Using the lower bound as a maximum seems reasonable to account for the worst case. Besides, when we’re dealing with code, there are usually other things to keep in mind. In other words, 5 should be considered the maximal value.

Conclusion

Understanding complexity management is crucial for the conscious creation of clean designs and for the assessment of existing technologies and “best practices”.
