
Relevance of system decomposition overhead

I have often thought about the "divide et impera" (Latin for "divide and rule") strategy. Originally, I believe, the concept was used to describe how ancient empires ruled, and I remember it being associated with forms of government during the Roman Empire. When I try to generalize this concept, or at least look at it through a systemic lens, it is interesting to notice that this mindset still seems, more often than not, very much present in how we think today; and I understand all the reasons why that is probably so. One note: I read the "Divide and rule" Wikipedia page, and the connotation given there is anything but positive, since it is associated with forms of authoritarianism.


Although it might seem obvious that complex systems cannot be managed efficiently and effectively in a centralized way, it is important to invest some energy in understanding how the system is broken down into different parts in the first place. I would love to know if there is scientific research - if you know of any, feel free to drop a comment - that indicates how much division and depth, in a given context, is "good enough" to decompose a system coherently and efficiently, and what kind of methodology can help find such a threshold.


In my understanding, if this "dimension" (i.e. the complexity of the system or process that implements the solution) is not addressed properly, the risk is to over-structure or under-structure the system, which:

  • in the best case, prevents the solution from scaling up or down when needed;

  • in the worst case, makes the system so inefficient that it no longer serves the purpose it was conceived for in the first place, that is, implementing the desired solution.


What I think is neglected when setting up systems and processes for a solution is that dividing a system into sub-systems carries an intrinsic overhead, introduced by the dependencies between the parts. I would even go as far as claiming that the cost of this overhead, which I think is at its core a communication cost, is probably the main factor determining the overall system's effectiveness. I guess what I am after is how much of this sort of System Decomposition Overhead (SDO) is acceptable, in a specific context, for such a system to be as good as possible.
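
To make this trade-off a bit more concrete, here is a toy sketch in Python. It is purely illustrative and entirely my own assumption: the names total_work and link_cost, the numbers, and the quadratic pairwise term are made up, not a formal model. The idea is simply that the work carried by each part shrinks as the system is split further, while the coordination overhead grows with the number of possible pairwise dependencies, so the total cost has a "good enough" minimum somewhere in between.

# Toy cost model (my own assumption, not an established result): splitting a
# system into n parts lets each part carry a smaller slice of the work, but
# every pair of parts may need to coordinate, so the overhead grows roughly
# like n * (n - 1) / 2 pairwise links.

def total_cost(n_parts: int, total_work: float = 100.0, link_cost: float = 1.0) -> float:
    """Hypothetical cost of a system decomposed into n_parts.

    work term:     total_work / n_parts  (the slice each part has to handle)
    overhead term: link_cost per potential pairwise dependency (the "SDO")
    """
    work_per_part = total_work / n_parts
    decomposition_overhead = link_cost * n_parts * (n_parts - 1) / 2
    return work_per_part + decomposition_overhead

if __name__ == "__main__":
    costs = {n: total_cost(n) for n in range(1, 21)}
    best_n = min(costs, key=costs.get)
    print(f"'good enough' split under this toy model: {best_n} parts "
          f"(cost ~ {costs[best_n]:.1f})")

Of course, real dependencies are rarely all-to-all and link costs are rarely uniform, but even this naive model hints at why both over- and under-structuring end up being costly.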


Another concept I think is relevant is the adaptability of the system itself, not necessarily for scaling purposes, but even "just" to allow iterative re-assessments of the solution. Even if we assume that the context does not change, what evolves as time goes by is the understanding of the problem and of the solution the system was put in place for. In my view, this is reason enough to justify the existence of an adaptive process meant solely to control and improve the system.


I don't know if I'm going anywhere with this post. It feels like generalizing the process - or whatever it is I am trying to generalize - would require tapping into disciplines in which I have no expertise. On the other hand, perhaps this level of generalization ends up exactly in a discipline so abstract that it almost feels like a "cult" (philosophy? first-principles thinking?), at least until a more formal description is available. I guess I just felt I needed to do such a brain dump.


Picture created by the author using GenAI
