
Thoughts on Systems and Processes

  • Writer: Brain Dumps
  • Jun 21
  • 5 min read

Updated: Jun 22

Some time ago, I heard something that I thought was absolutely brilliant and gave me some serious food for thought. The thinking was something along these lines:

A process designed for decision making should not be more complex than the technical system for which such decisions are to be made.

Somehow, this resonated deeply, but I knew it was not easy to explain. I was already conscious of the Problem of Induction; therefore, I did not attempt to generalise it. I was convinced, however, that it was still interesting to try to understand the concept better, even if “just” in the context in which it was mentioned. Back then, I hinted that a good starting point, perhaps, would have been the conceptual difference between a process and a system. Little did I know how challenging this path would be...


One important remark needs to be made before moving forward. It might be tempting to call upon Ashby's Law of Requisite Variety, but we would risk forgetting that, in terms of applications, Control Theory and Cybernetics are likely valid only in Engineering and Mechanics, as the field of Anthro-complexity is showing (i.e., humans are not machines).


There is another concept I feel like bringing into this brain dump at this point (ref):

Many people in Systems Thinking say systems are defined by boundaries. In Complexity Science not all systems have boundaries and this means that the Second Law of Thermodynamics does not apply.

This did not come as a surprise to me after all, even though I admit that, after hearing it, I thought I needed to review pretty much everything I have written in my brain dumps. Not that I have a problem with changing things if they were not expressed correctly or if, given new evidence, they no longer make sense in my mind. On the other hand, doing this would have required a new brain dump, and here we are.


In “System Dynamics and the Strive for Harmonious Balance”, I referred to the Second Law of Thermodynamics expressed in terms of spontaneous processes, possibly knowing very little about its meaning. This brain dump is probably an attempt at "codifying" what I know at this point, conscious that there may very well be fundamental gaps in my knowledge, and that filling these does not necessarily require new concepts.


What follows is a summary of some concepts I already expressed in "Impact of feedback loops on cognitive overload", extended for the purpose of this post:


| System's Nature | Characteristics | Processes | Thermodynamics Entropy (1) | Information Entropy (2) |
|---|---|---|---|---|
| Closed | Can exchange energy but not mass across its boundaries with the environment | Spontaneous | Always increases | Differential Entropy? Increases? |
| Isolated | Cannot exchange either energy or mass across its boundaries with the environment | Perfectly reversible | Constant | Differential Entropy? Constant? |
| Open | Can exchange both energy and mass across its boundaries with the environment (3) | Non-linear, path dependency, irreversible (hysteresis), retro-causality | Gibbs Entropy? | Differential Entropy? |

Notes

(1) Thermodynamics Entropy describes how the physical energy or heat in the system is dispersed.

(2) Information Entropy describes the system's average level of information, uncertainty or novelty.

(3) What is meant by “open systems do not have boundaries”? Is the boundary absent (in which case one could claim that there are really no separate systems), or is it rather, as described here, that mass and energy can be exchanged across the boundaries?
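
On the “Differential Entropy?” question marks above: differential entropy is the continuous-variable counterpart of Shannon entropy and, unlike its discrete version, it can be negative. Here is a minimal sketch; the Gaussian distribution is a purely illustrative choice of mine, not something implied by the table:

```python
import math

def gaussian_differential_entropy(sigma: float) -> float:
    """Differential entropy h(X) = 0.5 * ln(2 * pi * e * sigma^2) of a Gaussian, in nats."""
    return 0.5 * math.log(2 * math.pi * math.e * sigma ** 2)

# Unlike the discrete Shannon entropy, differential entropy can be negative.
print(gaussian_differential_entropy(1.0))  # ~1.42 nats
print(gaussian_differential_entropy(0.1))  # ~-0.88 nats: negative for a narrow distribution
```

Whether this quantity increases, stays constant, or is even the right notion for each of the three natures above is exactly what the question marks leave open.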


Concerning entropies, I have a couple of considerations lying around in my notes that I think are worth sharing:


  • "In classical statistical mechanics, in fact, the statistical entropy is formally identically to the Shannon entropy" (ref)

  • "Entropy in information theory is directly analogous to the entropy in statistical thermodynamics. The analogy results when the values of the random variable designate energies of microstates, so Gibbs' formula for the entropy is formally identical to Shannon's formula" (ref)

  • Postulate - “The entropy as defined by the Gibbs entropy formula matches with the entropy as defined in classical thermodynamics” (ref)



Therefore:

  • Classical thermodynamics entropy = (postulate) Gibbs entropy = (if the values of the random variable designate energies of microstates) Shannon's entropy (see also ref); a minimal numerical sketch of this equivalence follows
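
To make the chain of equalities above tangible, here is a minimal numerical sketch; the four-microstate distribution is made up purely for illustration, and the point is only that the two formulas differ by the constant factor k_B·ln(2):

```python
import math

# Hypothetical probability distribution over four microstates (purely illustrative).
p = [0.5, 0.25, 0.15, 0.10]

k_B = 1.380649e-23  # Boltzmann constant, J/K

# Gibbs entropy: S = -k_B * sum(p_i * ln(p_i))
gibbs_entropy = -k_B * sum(pi * math.log(pi) for pi in p)

# Shannon entropy: H = -sum(p_i * log2(p_i)), in bits
shannon_entropy = -sum(pi * math.log2(pi) for pi in p)

print(gibbs_entropy)                        # ~1.67e-23 J/K
print(k_B * math.log(2) * shannon_entropy)  # same value: S = k_B * ln(2) * H
```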


It is also worth keeping in mind something I already wrote in "The natural flow of energy":

  • Constructor Theory plays a more fundamental role than Shannon's theory of Information because it aims to answer the following question: "Which properties of the physical laws allow for a physical system to contain information?" Or, in other words, what is the set of those properties that allow for the existence of physical objects that contain information?


Complementing the table above, what follows is another step in the mapping effort of what I currently think I know:


| System’s Behaviour or Evolution | Characteristics | Processes | Energy | Thermodynamics Entropy | Information Entropy |
|---|---|---|---|---|---|
| Static / Stable (?) | Unchanging, or changing in an uninteresting / predictable way | Completely deterministic, known a priori, causality fully known, control feedback loops; “A stable System is one where the process is in statistical control” (ref) | Energy is spent solely on maintaining a state of equilibrium / status quo. The Principle of Least Action can help find the stationary equilibrium points. What about the others (i.e., unstable equilibrium points kept stable by injecting energy into the system)? | Depends on the System’s Nature | 0 (no novelty, all is known / predicted) |
| Growing / Improving | Development, Kaizen | Dynamic, adaptive, positive feedback loops | Energy gradients (Estuarine mapping?), micro-nudges | Depends on the System’s Nature | Differential Entropy? |
| Degrading | No coherent heterogeneity, reduced innovative capacity | Dynamic, negative feedback loops | Energy not spent, or spent solely on the fast (i.e., little available time) functional equalisation of the system | Depends on the System’s Nature | Differential Entropy? |
| Sustainable | Capable of enduring for a specified and arbitrarily long time (1) | Scale invariant (?), non-ergodic (sustainable over which period?) (2) | Energy gradients (Estuarine mapping?), micro-nudges; evolution as an energy minimisation process | Depends on the System’s Nature | Differential Entropy? |

Notes

(1) Sustainability does not necessarily provide information on the trajectory: e.g., a static system could be seen as sustainable, and so could a growing system (if not upper bounded) or a degrading system (if not lower bounded).

(2) What exactly is the definition of “Sustainable”? One way I see it is as "something that can be kept in existence".
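
As a side note on the “statistical control” characterisation quoted in the Static / Stable row above: one minimal way to operationalise it, under assumptions that are entirely mine (three-sigma control limits, made-up measurements), is to check whether a new observation stays within limits derived from the process history:

```python
import statistics

def in_statistical_control(history: list[float], observation: float, n_sigma: float = 3.0) -> bool:
    """Return True if the observation falls within mean +/- n_sigma * stdev of the historical data."""
    mean = statistics.mean(history)
    sigma = statistics.stdev(history)
    return abs(observation - mean) <= n_sigma * sigma

# Hypothetical process measurements.
history = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 10.0]
print(in_statistical_control(history, 10.3))  # True: the process still looks stable
print(in_statistical_control(history, 12.5))  # False: a signal worth investigating
```

This also connects to the point made further down: monitoring the current state and intervening only when such a signal appears.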


It is already challenging enough to get the first table right (I know that, at the time of writing, it might not be), let alone the second one. The sketches below somehow help me abstract these concepts:



I am not sure what the charts explain as of now: the evolution of the system's <entropy|quality> over time? When looking at all these charts, however, it seems I have to concede two points I already know:


  • In complex systems, there is constant novelty / emergence and, therefore, predicting the future state is not possible.

  • Because of the above, it is better to monitor the current state and intervene only if necessary.


I purposely left out the sketches for the Sustainable behaviour, for reasons related to the notes of the second table. One final remark: in this brain dump, I am not merely trying to categorise what I know or adopt a reductionist approach to explanations. I am trying to represent and describe a concept (or multiple concepts?) for which I am seemingly missing the correct “language” but for which I am certain there is a proper and, perhaps, more formal and fundamental explanation.


Picture created by the author using GenAI

 
 