More on Fundamental Information & Computation

An information-and-computation-based metaphysics is fundamental to several recent sources (as well as my own metaphysics) – even if the specialist scientific researchers often don’t concern themselves with the metaphysical aspects of ontology. In the most recent examples – John C Doyle and Mark Solms, say – information and its processes are simply taken as more fundamental than any other aspect of physics. The self-organisation and emergence of multi-layer systems and their architectures, Markov-blankets, active-inference, systems-thinking and the like then explain how more recognisable physical (and living and conscious) things arise and behave.

Sadly there’s a “bug” in orthodox science (Doyle) that rejects these descriptions, in which objective chains of causality appear to be broken by abstract patterns of information, and which naturally bring the subjective aspects of consciousness into consideration. Empathising with the subjective is a “Rubicon” orthodox science needs to cross (Solms).

So.

“Minimal physicalism as a scale-free substrate for cognition and consciousness.”

– Chris Fields, James F. Glazebrook and Michael Levin

Is a paper from an August 2021 special edition of the journal “Neuroscience of Consciousness” – citing several authors already relevant here – and referenced earlier by Anatoly Levenchuk as relevant to systems thinking and an information-and-computation-based metaphysics. I’ll say (MP = Minimal Physicalism) that it concludes:

In direct contrast with strict Cartesianism, MP holds that we can better understand our own awareness by understanding the awareness of our more basal cousins. Our homeostatic/allostatic drives and the mechanisms that satisfy them are phylogenetically continuous with those of prokaryotic unicells …

… The tradeoffs that we implement, and adjust in real time, between perception, memory, and planning are tradeoffs that have been explored and adjusted in niche-specific ways by all organisms throughout evolutionary history. We can take advantage of these fundamental mechanistic similarities to design theoretical and experimental paradigms that reveal and assess scale-free properties of consciousness in both natural and engineered systems.

Note:

“better understand our own awareness by understanding the awareness of” more primitive organisms … “both natural and engineered systems”.

No real review, just some extracted highlights which I can link to previous work here:

MP is Minimal Physicalism – that is, there are no physical assumptions beyond quantum information theory. (Not sure why “quantum information” specifically – but certainly information theory/ies are more fundamental than anything else in physics.)

All physical interaction IS information exchange. (Agreed)

There is no Hard Problem. That is HP is not a problem to be solved, rather a set of inhibitions to be overcome. (Absolutely! It’s only orthodox science’s denial of subjectivity that gets in the way of explanatory understanding – see Solms above.)

There is no Combination Problem of psychic / subjective elements. (Ditto. Never was!)

“Bow-tie” systems topology. (Interesting. Something I’ve used in real-world systems engineering before, and which Doyle’s work often uses. Maximum diversity in the higher and lower layers; minimally diverse, exploitable bottlenecks in the middle layers. Everything comes in threes, even individual layers.)

Markov Blankets (MB) – both Pearl and Friston forms covered. Also Tononi (but no Dennett or Doyle). Also Boltzmann (1995!*), Hacker, Heylighen, Damasio and Csikszentmihalyi (flow!). (* Boltzmann lived 1844 – 1906.)

Not just Homeostasis (steady state) but Allostasis predictive of future demands.

Many testable “predictions” (the point of this paper?) … including
– use of “Quantum Zeno Effect” (Henry Stapp is also a joint reference)
– interoception and “the self”. (More Solms).

Prediction 15: The “self” comprises three core monitoring functions, for free-energy availability, physiological status, and organismal integrity, and three core response functions, free-energy acquisition, physiological damage control, and defense against parasites and other invaders. These will be found in every organism. Indeed they are found even in E. coli, which has inducible metabolite acquisition and digestion systems (Jacob and Monod 1961), the generalized “heat shock” stress response system (Burdon 1986), and restriction enzymes that detect and destroy foreign, e.g. viral DNA (Horiuchi and Zinder 1972). All of these responses act to restore an overall homeostatic setpoint, i.e. an expected nonequilibrium state; hence they can all be viewed as acting to minimize environmental variational free energy or Bayesian expectation violation. (Friston 2010; 2013).
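[For orientation – my own addition, not from the paper: in the Friston literature “variational free energy” is an upper bound on “surprise”, the negative log-probability of what is actually sensed, which is why the quote treats minimising free energy and minimising Bayesian expectation violation as the same job. With o the sensed observations, s the hidden states, q(s) the organism’s approximate posterior and p(o,s) its generative model, the standard decomposition is:

```latex
F = \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o,s)\big]
  = \underbrace{-\ln p(o)}_{\text{surprise}}
  + \underbrace{D_{\mathrm{KL}}\big[q(s)\,\|\,p(s\mid o)\big]}_{\ge 0}
  \;\ge\; -\ln p(o)
```

Since the KL term is never negative, driving F down also drives down surprise about deviations from the expected (homeostatic) setpoint.]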

Templeton funded – Christian religious funding being an interesting feature common to much research that questions the fundamentals of science itself.

Zience and John C Doyle

Further to the previous post, let’s try and elaborate some specifics of what John C Doyle has to say. What is clear, after the throwaway “scientists will hate this” remarks, is why he is unpublished in more popular journals and publishing formats: because he is pointing out “a problem with science”, he meets resistance to getting published.

[It’s interesting that in Gazzaniga, where I first came across Doyle, the most engaging read was the more autobiographical “Tales from Both Sides”, which I originally understood to be a reference to the two sides of our bicameral-mind / divided-brain, but which was in fact a reference to the politics between researchers with unpopular findings – the story of whose work got published with which content, and of Sperry, who eventually won the Nobel Prize. I’m not, and never have been, a conspiracy theorist. The institutional defence mechanism is a bug in scientific thinking, not some nefarious active conspiracy of secret interests. Essentially the bug is ignorance of the multi-layered architecture of “systems thinking”, which is artificially flattened into one-dimensional logical objective “rationale”.]

It is a seriously degenerate problem, which is winning, because it’s self-reinforcing and we humans are poorly evolved to resist it. The bug is like a virus exploiting rational human weakness. Multiple timescales are part of the problem too – from speed of light global comms, to the pace of biological evolution, and the enormous range of calendar-based individual and collective human activities in between.

This is a disaster bigger than Anthropogenic Climate Change, not least because Zombie science, Zombie law and Zombie politics compromise our chances of successfully addressing it.

In this October 2021 presentation, Doyle gives us his take:

Ironically, Dan Dennett was one of those who used real parasite-driven behaviour to illustrate issues the “Four Horsemen” had with religion in the early 21st C religion vs science wars. The classic example being a parasitic fungus that behaves virally, its spores infecting the primitive nervous system of a particular species of ant, so that it not only effectively kills the insect, but changes its behaviour to ensure it is eaten by large mammalian hosts when it does die – a massive resource to multiply the parasite’s numbers and spread them through the host population. A neat viral trick. (See Cordyceps if that’s not already familiar). Doyle uses exactly this example, and more classic variations – like the parasite infecting mice which reduces their fear of predators like cats. Same propagation trick.

Zombies were a popular meme in philosophy – a thought experiment – about whether an organism’s (like a human’s) behaviour reflected any internal knowledge of what it was doing and why. How would we know if it had any internal sense of self? These virally compromised insects and mice also became known as Zombies for their so obviously self-disinterested behaviours.

Doyle’s contribution is Zombie Science or “Zience”
(Rough paraphrase of 15 mins from ~28:30 to ~43:00 in this presentation.)

Vaccines – in the biological and social sense – are an example of a “Diversity Enabled Sweet Spot” in the enormous stack of human systems. As we have seen with Covid, the medical science is only one small part of the stack, from policy setting and enforcement, through medical processes and procedures and virus mutations, to the levels of individual and social psychology and behaviours. Many-layered and massively complex, massively distributed asynchronously around the globe.

But that’s just the warm-up. Here’s the big thing.

Things are going wrong. And things are going to get worse. And almost everything we are doing with IT/Comms networks – like “Digital Transformation” – is actively making our problems worse.

We really need to understand fundamentally what is happening, not just anecdotally through individual examples.

Viruses exploit the universality of operating system architectures. And viruses rule – they kill HALF of their hosts every day (most of those hosts are bacteria and other single-celled creatures).

As well as Viruses, we also have more active predators in our systems – Malware. Social Media is itself the most important Malware.

The awful thing about our most recent viral experience across all these levels is that it reinforces existing inequalities (race, wealth etc.)

Language itself is hijackable – it’s an important part of our operating system – we have many issues around the globe where exactly that is happening. Zombie memes. Contagious misinformation – false, unhealthy and dangerous. Previously ‘solved’ human rights and freedoms problems are coming back as well as new ones.

And science is not immune. Zombie Science.
Its own self-correcting processes are not protection against the problem. Science will in fact reject all these multi-layer / diversity arguments. We are losing this battle. Good science is NOT winning the war against “Zience”.

Legal systems too. Laws and enforcement.
Zombie Law too. Unintended consequences. Zombie corporations, Zombie capitalism … endless.

It’s the architecture, NOT the individual viruses or humans.

I feel I’m fighting in the same trenches as John Doyle.

(Also note the significance of “Diversity” in the “Systems Level Synthesis”.
“Vive la Difference” as I so often say. Our systems will always have layers to be practically functional – fast and accurate enough – a single layer system can never work. But such a system will always have a “diversity enabled sweet spot” and many layers will be virtualised relative to the explicit layer in which “we” operate. These are vulnerable to viral attack, and we need to ensure we don’t lose sight of what matters in each layer so we can protect & manage them, not allow them to become Zombies.)

We need systems thinking – about the right things in the right layers in the architecture – not about all the “objects” (individuals) in the system and their direct logical / causal relations in the explicit layer. We need to consider and protect against viral fragility in the virtualised layers.

As in the preamble note above, the “bug” in science – and the reason Zombie Science is not helping us solve this problem – is that it rejects independent causality in multiple layers – flattens everything into one layer of explicit objects.

Good science
is NOT winning the war
against “Zience”.

Following the science can be dangerous.

=====

 

Scientists Will Hate This

I mentioned John C Doyle as a candidate for a new real-life (living) “hero” in my research quest here in 2019 and again here in 2021. I say “new” hero because my long term hero has been Dan Dennett. Of course since then, both Iain McGilchrist and Mark Solms have taken up a good deal of attention with their own heroic contributions, but I mentioned Doyle again the other day in an exchange with Anatoly Levenchuk and “Systems Thinking”.

(Doyle is actually a reference in Gazzaniga who – like Sperry – is an important source in the cognitive science space, used directly by McGilchrist and many others, but not Solms so far as I can see. It’s how I first came across Doyle.)

I sensed, and still sense, that his work is going to prove important to how architectural systems thinking is applied to everything from fundamental physics to global human issues, as well as brains / minds and IT/Comms networks. Trouble is, he admits, he’s very “disorganised” – unstructured presentations (oral and slides) given to technical audiences, and files on public shared drives. He’s prolific, but it’s all papers written with collaborators and students, no book(s) beyond his original control systems specialist field, and no obvious indexing or structure to his topics. In a sense that’s probably justified by the content of his current subject matter, which demonstrates “universal” trade-off features of all multi-level systems – almost all his graphic abstractions are versions of each other.

I had already shared this presentation:
John Doyle, “Universal Laws and Architectures in Complex Networks”
March 2018 @ OFC Conference

Anatoly shared this recent one (as well as many papers, and in fact Doyle drops many paper references into his presentations, acknowledging his student contributions):
John Doyle – “Universal Laws and Architectures and Their Fragilities”
October 2021 @ C3.ai Digital Transformation Institute
(And this folder of public papers highlighting Social Science Architecture and Systemic Fragility.)

Now there is a thread of overturning scientific orthodoxy running through all the above, counter to the received wisdom of logical objective rationality of causal chains, where wholes are reduced to the summation of the history of their parts. In doing so it ignores (a) ergodicity – that not just the end states of individual parts but their network of paths through possible histories affects whole outcomes – and (b) strong emergence – that wholes have their own properties and causes not causally determined by their parts.

At the level of political and aesthetic endeavours, no-one would bat an eyelid. The problem is that it exists in would-be science too, and in rational thinking more generally.

Dennett – warns scientific types to avoid greedy reductionism, and to suspend disbelief and hold off on definitively objective definitions as rational arguments themselves evolve over repeated cycles.

McGilchrist – having debunked cortical & hemispherical misunderstandings of how our brains and conscious minds evolved to work, pleads for recognition of the naturally sacred beyond the reach of our orthodox (objectively verifiable) scientific model of reality.

Solms – having debunked cortical and mid-brain misunderstandings of those same brains and conscious minds, and having established the basis of consciousness in subjectively felt experience and its evolved existence as a distinct causal entity through fundamental information computation processes, makes a plea for objective scientists to cross the Rubicon to take in the view from the subjective side.

Doyle – whose work arises explicitly from IT/Comms&Control computing networks, demonstrates repeatedly, with all manner of real-world examples (from talking or riding a bike to mobile apps & social media), that there are universal abstraction features of multi-layer systems network architectures that mean the virtualised wholes do more – better, different – than any of their parts. “Scientists will hate this!” he repeats in throwaway remarks to his technical audiences, recognising that strongly emergent causal identity of virtual entities is contentious for objective science, STEM and Engineering. (Slightly infuriatingly, unlike the better-informed brain scientists above, Doyle uses “cortex” as shorthand to mean that part of the human system inside our heads. The cortical fallacy.)

There is a common “problem with the received wisdom of orthodox science” running through all of this, and a lot of “systems thinking” and “information processing” common ground in where the problems arise.

It’s a “bug” in the received wisdom of “science-led” human rationality. The one that’s been driving this Psybertron project for 22 years.

We’ve barely scratched the surface with Doyle. I’ve mentioned elsewhere that in his terms this problem really is a bug. Viruses are especially adapted to hijacking vulnerable layers in multi-layer-architected complex systems, without needing to carry the overheads of more complex organisms such as ourselves and our social organisations. Humans are particularly badly adapted to deal with viruses that work against human interests – especially memetic ones in society’s information and communication layers. Our social systems – including science – are much more fragile than our rationality admits. Unless we want to give up on humans and declare viruses and the simpler single-celled organisms “the winners by headcount” in the cosmic game of evolution, we need to find memetic vaccines that work.

(With Anatoly’s help) I need to dig further into Doyle.

Camille Paglia – Sexual Personae

Received and started reading Sexual Personae by Camille Paglia. The book that made her famous as a radical feminist – one who also “identifies” as transgender and has been a critic of post-modernism’s consequences. Clearly someone of intellectual subtlety – and balls – on a topic that exemplifies our (21stC) modern polarisation predicament, at a time when we desperately need careful discourse to make progress.

(Hat tip to Lila @commonclione for the recommendation).

Only just started the read, and already loving the style, so I expect I will digest the whole. Here an early sample:

Western love is a displacement of cosmic realities. It is a defense mechanism rationalizing forces ungoverned and ungovernable.

Sex cannot be understood because nature cannot be understood. Science is a method of logical analysis of nature’s operations … But science is always playing catch-up … Science cannot avert a single thunderbolt. Western science is a product of the Apollonian mind: its hope is that by naming and classifying, by the cold light of intellect, archaic night can be pushed back and defeated.

Name and person are part of the west’s quest for form. The west insists on the discrete identity of objects. To name is to know; to know is to control. I will demonstrate that the west’s greatness arises from this delusional certitude.

Our delusional certitude. Spot on.

The traditional contrast to Apollonian is Dionysian, but she uses “chthonian” instead – from the bowels of the earth. Being post-modern whilst criticising post-modernism is the trick. I call myself PoPoMo (post-post-modernist). The naming and classifying problem is my #GoodFences vs #IdentityPolitics agenda. Lots to look forward to.

The Emperor’s New Markov-Blankets?

I mentioned in my review of Anatoly Levenchuk’s “Systems Thinking 2020” having some subsequent dialogue about common ground in other areas of the Psybertron agenda. A significant overlap is the work of Karl Friston (Free Energy Principle / Markov Blankets / Emergent Organism / Active Inference) – in my case through my reading of Mark Solms, and in Levenchuk’s case through his and Friston’s joint membership of the advisory board of “The Active Inference Lab”.

[Small world in itself – and yet in the days since, the concept of “systems thinking” is everywhere, from politics and biology to consciousness and metaphysics. This is not going away. It was xxxx noticed back in January? I’d slipped quite naturally into systems language in ongoing dialogues. And I made quite a thing of “systems architecture” considerations when interpreting both Solms’ and McGilchrist’s (independent) work in terms of (say) anatomical and functional brain architecture.]

In the dialogue above, Levenchuk shared a paper appearing to cast doubt on Friston’s use of Markov Blankets – “The Emperor’s New Markov Blankets” – ENMB (2021) for short here. Full refs:

      • Bruineberg, Jelle and Dolega, Krzysztof and Dewhurst, Joe and Baltieri, Manuel (2021) The Emperor’s New Markov Blankets. Behavioral and Brain Sciences 1-63. [Preprint] doi:10.1017/S0140525X21002351
        or
      • Bruineberg, Jelle and Dolega, Krzysztof and Dewhurst, Joe and Baltieri, Manuel (2020) The Emperor’s New Markov Blankets. PhilSciArchive [Preprint]

(More dialogue below in the Post Notes.)

START

It’s a substantial paper, 48 pages with some pretty heavy maths as well as arguments of principle. In fact when I read the parts of Solms where, amongst other things, he used (Freudian) mathematical notation additionally developed with Friston, I noted that it was perfectly possible I wasn’t properly understanding Friston’s arguments. Whilst it chimed intuitively with my own understandings, I wasn’t well placed to say whether it was formally right, one way or another – an occupational hazard in this kind of multi-disciplinary research.

“I’ve also kept in [my reviews] lots of technical specifics which I probably don’t understand as Solms intended, primarily to allow me later checking against other resources” (Myself, earlier.)

Well here is an opportunity 🙂 to respond to the “ENMB Paper” quoted directly below:

“This web of formalisms (Free Energy Principle, Markov Blankets, Active Inference) is developing at an impressively fast pace and the constructs it describes are often assigned a slightly unconventional meaning whose full implications are not always obvious. While this might ironically explain some of its appeal, as it can seem to the layperson to be steeped in unassailable mathematical justification …”

“We will argue that although this approach might have interesting philosophical consequences, it is dependent upon additional metaphysical assumptions that are not themselves contained within the Markov blanket construct.”

“In our view the FEP literature consistently fails to clearly distinguish between the ‘map’ (a representation of reality) and the ‘territory’ (reality itself). This slippage becomes most apparent in their treatment of the concept of a Markov blanket.”

“… a broader tendency within the FEP literature, in which mathematical abstractions are treated as worldly entities with causal powers.”

“[Friston’s is] a new and largely independent theoretical construct that is more closely aligned with notions of sensorimotor loops and agent-environment boundaries.”

“Inference within a model, as opposed to inference with a model, seeks to understand inference as it is physically implemented in a system, and places literal Markov blankets at the boundary between the system and its environment. The ‘model’ within which these Markov blankets are used is usually understood ontologically: here the map is the territory – the system performing inference is itself a model of its environment, and its boundary is demarcated by  Markov blankets.”

“This procedure of attributing to the territory (the dynamical system) what is a property of the map (the Bayesian network) is a clear example of the reification fallacy: treating something abstract as something concrete (without any further justification) … we propose to distinguish between ‘Pearl blankets’ to refer to the standard ‘epistemic’ use of Markov blankets and ‘Friston Blankets’ to refer to this new ‘metaphysical’ construct. While Pearl blankets are unambiguously part of the map (i.e., the graphical model), Friston blankets are best understood as parts of the territory (i.e., the system being studied).”

“As a general rule, one should not mistake the map described by a model for the territory it is describing: a model of the sun is not itself hot, a model of an organism is not itself alive, and so on.”

[ENMB Paper]
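[An aside of my own to make the “Pearl” sense concrete: in a Bayesian network – a directed graphical model – the Markov blanket of a node is just its parents, its children and its children’s other parents. Conditioned on that set, the node is independent of everything else in the graph. It is a purely “map-level” construct, computable from the graph alone. A minimal sketch, with purely illustrative node names:

```python
# Minimal sketch of a "Pearl blanket": the Markov blanket of a node in a
# directed graphical model is its parents, its children, and its children's
# other parents (co-parents). Node names below are illustrative only.
def markov_blanket(node, parents):
    """parents: dict mapping each node to the set of its parent nodes."""
    children = {n for n, ps in parents.items() if node in ps}
    co_parents = {p for c in children for p in parents[c]} - {node}
    return parents[node] | children | co_parents

# Toy chain: external -> sensory -> internal -> active
graph = {
    "external": set(),
    "sensory": {"external"},
    "internal": {"sensory"},
    "active": {"internal"},
}
print(markov_blanket("internal", graph))  # {'sensory', 'active'}
```

Given its blanket ({'sensory', 'active'}), “internal” is conditionally independent of “external” – which is exactly the graph-level property the Friston school then re-reads ontologically, as a real boundary between a system and its environment.]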

OK, so again, without going through any of the mathematical rigour – itself unassailable by me – an important issue is indeed covered by the extract above: that there are metaphysical (ontological) premises, possibly unstated in the work of Friston (and Solms), that might “appear to break this general rule without any further justification”. However, those premises are made quite explicit here.

Solms’ own response is categorical, without any further metaphysical justification.

I have read [the paper]. I don’t think the Markov blanket formalism is a map of a territory but a description of the causal dynamics that actually exist in a territory. The territory in question is the (monist) functional organization of both brain and mind.

The ‘territories’ are the observable mental and neural phenomena. What they are calling the ‘map’ is, for me, the underlying functional system that explains those phenomena. This explanatory level (the functional organization of the system) cannot be observed; it must be inferred.

It is a dualist position. The formalism describes the actually causal ontology. As Galileo said: the book of Nature is written in the language of mathematics.

(Solms in Twitter exchange.)

I don’t buy the Galilean / Platonic argument as definitive, but it reinforces that this is not an accidental error in this school of work, but a deliberate act that needs to be understood as such. Sure, the “book” of nature may be written in maths, but maybe not “nature” itself?

Good question, from any small boy not seeing the emperor’s clothes.

But it’s not necessary to analyse exactly what Galileo, or Plato before him, was asserting in any specific detail. That general rule of not confusing the map with the territory – not falling for the reification fallacy – is good advice, and indeed ancient advice. A Buddhist might point out that “the finger pointing at the moon is not the moon”. In science generally, our models – the mathematical constructs used to represent, analyse and predict data about the real world – are contingent approximations to the behaviour of that real world, but they are not it. Physics isn’t the real world, it’s our best current model of it. In any number of more mundane engineering applications, especially those that get implemented in analytical and operational computing applications, we constantly have to remind ourselves that the model is only a model, not the real thing, however seductive the virtual reality might be.

Dennett (much cited here) is among those acknowledged as providing advice on the ENMB paper, though without any specific reference. Given his views on disembodied information and computation (**) – independent of any physical layer – in his own “evolved consciousness” story, I’d be interested to know his actual views on this argument. I’m pretty certain he uses very similar arguments to Friston and Solms, as do I.

What we are saying is that in this model the computation (**) – the sensing of information and the algorithmic processes of the systems and subsystem components, with and without Markov-blankets – is quite literally happening. These information entities and processes are more fundamental than the physical things which self-organise and emerge from them. This is indeed a metaphysical claim, whether or not explicitly stated as such by every user.

In these theories – in my own metaphysics, as well as in what Solms says above – the information processing (**) is the territory, the foundation of the territory itself, not just a map of it. Though obviously, like any model, we also have plenty of other information representations used to describe and present (map) the model and its processes to human audiences.

Friston and Solms (and I) are not unique here. As the ENMB paper acknowledges, there are many philosophers and cognitive scientists with information-and-computation-based ontologies of reality. Integrated Information Theory (IIT, after G Tononi) is one well-developed example, but these are part of a wider movement. One corollary of these foundational (metaphysical) information-based ontologies is that both the physical (body) and mental (mind) worlds and their causal relations are explained by the same underlying metaphysics. A credible monism where dualism has stubbornly continued to persist. (There is also a lot of new interest in various versions of pan-psychism in the 21st C, and again these theories provide an information-based “pan-proto-psychism” that may better support them.)

In many ways it’s good that the ENMB paper exists, because it is ringing an important alarm bell: more people in both science and philosophy should wake up to how radically important these not-so-new theories are.

Thanks for the warning ENMB, but what you are describing is exactly what we’re doing.

END

=====

In a similar critical vein to the ENMB Paper, this newer one on FEP

(**) And Yogi Yaeger’s paper: “Natural Information Processing” as opposed to formal “Computation”. Not to be confused – the formal term “computation”, associated with the “computability” of Church, Turing, Shannon et al, and the natural-language term for “processing information”.

=====

Post Notes:

One source here on Psybertron that I’ve not really developed yet is John C. Doyle, a control systems guru I’ve mentioned being impressed with before – he’s written the text-books and is much cited in papers – but he hasn’t written for a generalist public, so he’s quite low profile if it’s not your field. He’s very much a systems thinker looking for architectural abstractions, yet using very real-life examples to illustrate. In this 30-minute talk – very dense / terse / rushed, packed with content easy to miss if you’re not concentrating – the last 15 minutes are very interesting. Very clearly joining up issues of multi-layer systems optimisation and evolution (Levenchuk) with human situational awareness and responses based around the visual field and the speed of saccade eye-movements (Solms)?

Anatoly Levenchuk Comments:

2. It is not only the “map-territory” distinction and the representation relation that may be confusing. There is also the functional object — physical object distinction, with its implementation/realization relation. And you should decide what type of relation each of the authors means.

IG: Ah, yes. Not always explicit in every discussion, but pretty fundamental that the systems / architecture view is functional – you maybe saw my comments on trying to get a Brain Atlas that held to this schematic view. See later comments on process-based relations.

3. A day ago I did a post about phys-math-modeling and the compactification/universalization of knowledge. It suggests a longer chain of ontology modeling: a physical object from the domain is classified/annotated by a type of physical object (ToPO) from a physics textbook, and then this ToPO is represented (or classified/annotated, if you prefer) by a mathematics/abstract object. I am not mentioning the functional object option here; it is complicated enough as it is. Mathematics is the foundation and upper ontology, physics is the middle ontology, domain objects are the working ontology.

Thus you can parse phrases like “As the locus of molecular, thermodynamic, and bioelectric exchange with the environment, the cell membrane implements a Markov Blanket (MB) that renders its interior conditionally independent of its exterior (Pearl 1988; Clark 2017); this allows the cell to be described as a Bayesian active inference system (Friston 2010, 2013; see also Cooke 2020 for a variation on this approach)” — this “implements” means a classification relation (but you can easily go along with 3D extensionalism and try the functional-physical object “implementation/realisation”).

Phrase I took from https://chrisfieldsresearch.com/min-phys-NC-2021.pdf
My text (sorry, in Russian) here: https://ailev.livejournal.com/1621997.html

IG: Excellent. The chain of causality, with emergent layers separated by Markov blankets is the model I’ve had in mind all the way through – even before I’d consciously heard of Markov blankets 😉 As you know my interest is going back to metaphysical foundations, but yes, even with a functional bias / preference we must get to the functional-physical realisation in the real world. Think I’ve come across Fields and Glazebrook before but yes … at root in my model, “(All) physical interaction is information exchange”. (That’s precisely why information is more fundamental metaphysically 🙂 )

4. “, the information processing is the territory, its foundation not the map of it, though obviously we have plenty of other information representations used to describe and present (map) the model and its processes to human audiences” — you refer here to “information processing” and I can point you to:

— “Integrating information in the brain’s EM field: the cemi field theory of consciousness”, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7507405/ — “I describe the conscious electromagnetic information (cemi) field theory which has proposed that consciousness is physically integrated, and causally active, information encoded in the brain’s global electromagnetic (EM) field. I here extend the theory to argue that consciousness implements algorithms in space, rather than time, within the brain’s EM field”.
— “Types as Processes, via Chu spaces”: We match up types and processes by putting values in correspondence with events, coproduct with (noninteracting) parallel composition, and tensor product with orthocurrence. We then bring types and processes into closer correspondence by broadening and unifying the semantics of both using Chu spaces and their transformational logic. Beyond this point the connection appears to break down; we pose the question of whether the failures of the correspondence are intrinsic or cultural. — https://www.sciencedirect.com/science/article/pii/S157106610580475X?via%3Dihub (and further along this line: Information flow in context-dependent hierarchical Bayesian inference, https://chrisfieldsresearch.com/contextual-pre.pdf)

IG: Thanks for these. I’m sceptical of “CEMI” and I don’t consider it necessary – it’s an information field, whatever the physical substrate, EM or otherwise – but I may follow-up, since Chris Fields is someone I have time for. Thanks.

Hope I have paid my debt of commenting on your posts )))
Sorry, but I am not sure I have told you anything substantial about Markov blankets. Your post shows that you already understand the difference between an abstract MB and a physical one; I can only add the complexity of ontology choices with the functional object variant )))

IG: Actually – I was hoping you’d comment on the two prior posts on Solms – but YES – we have effectively covered the same ground. I appreciate your confidence I’m understanding this – or that at least we’re both misunderstanding it the same way 😉 Many thanks for the dialogue.

In my view “process” (something that flows; functional diagrams always have some flow/current in them) is a good heuristic that we are dealing with functional (run-time) objects, not physical/product/module (construction-time) objects.

I often have talks with “only process” people (e.g. category theory or other “transformations first” views). Common thinking is about objects (of attention!) and relationships (processes), and IMHO this is supported by the wetware in the brain. For thinking you need both, but with only objects it becomes metaphysical (especially if you don’t have at least 3 timescales — evolution, learning/adapting and run-time), while with processes only you have no attention anchors for the wetware. Therefore both, but things first, processes second.

4D extensionalism is good for integrating the object vs. process false dichotomy. Process is related not only to time and function, but also to space and physical objects!

IG: Yes, I understand that 4D model, and that in the real day-to-day world we interact with space and “things” – but I am (after Whitehead) being quite radical here – metaphysical again 😊. (These “things” only emerge from networks of “events” at the information level … longer story).

Yes, the works of John Doyle are relevant here — https://scholar.google.com/citations?hl=en&user=C6DtGmMAAAAJ&view_op=list_works&sortby=pubdate

IG: In my more general public dialogues – as opposed to researching technical papers – I find very few who have heard of or understand Doyle’s multi-layer architectural optimisation. Once we accept layers (Markov blankets) as REAL, then I think his view is VERY IMPORTANT to so much evolution / self-organisation of ALL systems. This is amazing convergence, from the practical engineering level right back to fundamental physics – as information and computation 🙂 Many thanks again.

Euclidean Points / Democritan Atoms

Another brief “hold that thought” post.

My own metaphysics is based around atomic quantum points that I think of as truly Democritan atoms – genuinely indivisible, not even conceivably divisible, “a-toms” – without any parts or even properties of their own. Everything of any significance in the world – everything full-stop – is about relations between these.

What I know now, thanks to a question in the University Challenge final last night, is that Euclid defined his conception of a point as

“a point is … that which has no part”

‘Twas ever thus. Nothing new under the sun.
(Democritus preceded Euclid but was still living when the latter became active?)

Citizens’ Assemblies / Conventions

Just a brief note – to recommend this edition of BBCR4 Positive Thinking. Citizens’ Assemblies and a rolling Citizens’ Convention are an idea I bought into over a decade ago. My logic is this:

Democracy appears broken.

Democracy of some kind (after Churchill) is nevertheless the best – or least worst – system available.

Therefore we need to fix democracy, not reject it, not throw democratic babies out with the bath-water.

We need to add more representation that works.

Mostly we already have two or three level systems – Head of State / Lower House / Upper House, two of which are “executive” – but the most directly representative of these renew public representation only on fixed election cycles – cycles which are too long for changing world events, but too short for proper long-term values and investment in priorities.

Rather than tinkering with these established institutions and their election / voting arrangements directly (see “babies & bathwater”), the proposal here is to add another one or two levels of Citizens’ Assemblies underneath these. To increase public engagement between election cycles and to manage & maintain priorities beyond these.

Now, there are arguments against – about the self-selection of those that actually get engaged – partisan / activists – vs the professional knowledge and commitment needed, and the fact that any influence may be toothless even though placed on record. We already have standing “parliamentary committee” systems that partly address the same issues? So clearly it’s important they’re not allowed to become redundant box-ticking activities, that engagement is supported by genuine commitment and resources, and so on.

BUT whatever their drawbacks, the increased engagement is to be encouraged – never a bad thing – AND, most importantly, it properly recognises and reinforces a multi-layered, self-organising, Systems Thinking approach to the most complex problem facing humanity. Governance.

The Belgian approach described in this programme has several innovations – a clearly two-time-scale / two-level arrangement of ongoing convention and periodic assemblies – and some quality thinking on how the arrangements can avoid the pitfalls. Recommended. Worth a listen.

“Definition as a Coffin” – Cybernetics to Systems Thinking

Definition as a Coffin?

“Hold your definition” is a plea by philosopher Daniel Dennett, often cited here on Psybertron, when dealing patiently with his scientific friends. Any discourse that starts with apparently clear definitions, manipulated solely by logic, is inherently limited by the fit between the history of those definitions and the future of reality. At best, definitions are tentative outcomes from any discourse of any complexity.

My mind was caught this week by the idea of definition as a coffin for what Anatoly Levenchuk calls “dead-think” in his book which forms the basis of this Systems Management School course on “Systems Thinking 2020”.

[Aside – an important post of mine from 2015 discusses the temporary / contextual / contingent nature of objective identity-based definitions, anywhere from physics to politics.]

More on that later, but first, how did we get here?

The Circle from Cybernetics to Systems Thinking

The “Cyber” root has been behind the Psybertron project since I started it 22 years ago, with the rhyming “Psy” prefix emphasising the psychological over physical perspective, and the “tron” alluding to the increasing electronic automation context of our 21st century journey into “What, Why and How do we Know?”. A project triggered by the increasingly despairing sense that what is “known” has a much more significant psychological aspect than the received wisdom of the objective “STEM” sciences had us believe in the previous 20th century. That, and the sense of the inevitable, that algorithmically automating this stuff – without first addressing this problem – could only make it worse.

The 21st century experience of free, ubiquitous, electronic communications certainly bears out those fears, but little did I know. Garbage in, more extreme garbage out, as they say, even in machine-learning / AI?

I’ve recapped the place of Cybernetics and Systems Engineering / Thinking in the project several times over the years. It was July 2002 when I first made the Cybernetics connection explicit and noticed that, lo and behold, the original intention of those that invented it – at the 1946 Macy conference with Wiener and von Neumann – was that it concerned human decision-making and human systems of governance from the start. I was taking this human psychology angle for granted (above) in my own philosophical researches. It was January 2012 before I was prompted to go back and read Wiener’s original 1948 Cybernetics. And even later, in January 2018, before I noticed that this human cybernetics had been dubbed the Second Cybernetics as long ago as 1963, since those first working with it in early systems engineering and electronic computing applications had clearly forgotten what the human originators intended by “kybernetes”, the root of governance.

Anyway, as I say, it’s not the first time I’ve recapped this story, most recently with this (March 22) reference and this (August 2021) reference, in which I made the Systems Engineering to Systems Thinking connection explicit. Having been an engineer working in systems of many kinds my whole career since the 1970s, “systems engineering” was informally central to everything anyway, implicit even as I was working the day job explicitly in the engineering of electronic information systems.

In that post I acknowledged …

Anatoly Levenchuk, the then chair of the INCOSE Russian chapter, and his colleague Victor Agroskin, still the smartest people I ever met anywhere in any context.

… as the people who first made Systems Engineering (now Systems Thinking) explicit for me as the topic under consideration. The English text of Anatoly’s latest book, mentioned in the introduction, is intelligently browsable on-line here, once you’ve registered for the Systems Thinking course. There is also a downloadable PDF of the December 2021 text. (Personally, for anything over a few pages, I still prefer to read and review actual books, but let’s see how we get on. This is a 358-page book.)

Initial Review

As I write this I’ve only read and skimmed parts of Systems Thinking 2020, but given this and given the above, it is already recommended.

Firstly, the 358 pages are all content. Apart from the Table of Contents, there are no “end materials”: index, bibliography, references or notes to give any clues. All additional resources – and there are many – are linked within the text. (I often prefer to compare notes on these before I read any non-fiction book in full.)

Also, in my experience, idiomatic Russian is handled very badly by things like Google Translate, and working with smart people like Agroskin and Levenchuk through on-line text and blogs has proven too hard, except where they were doing their own real-time translation of their Russian thoughts into oral English for me. The good news is that the English translation of the book is human (by Ivan Metelkin), and whilst additional native-English-speaking editing will no doubt further improve the read and clarify intent, this text is entirely intelligible.

Details, Details.

I picked up early on Levenchuk’s focus on pragmatism and practicality. One of the earliest philosophical things I wrote (2006), after more than a decade of modelling dictionaries of terminology for systems engineering purposes, was a recognition that, whilst many of the problems with meaning (epistemology) involved more philosophical abstractions, that project was primarily pragmatic – for use by engineers on deliverable projects.

The principal concerns were ontology, a model of what existed, based on pragmatic interpretations of classification and set theories, avoiding over-reaction to such anomalies as Russell’s Paradox, so that anything useful could be said about anything. That work was of course primarily pragmatic.

At first sight this looks like the age-old “perfection is the enemy of the adequate” which can endanger the delivery of any project, but in fact Levenchuk points out that this is a misunderstanding about levels of thinking that need to be recognised as distinct. In very much the same way that Systems Engineering might appear to have morphed into Systems Thinking, in reality these are distinct areas (layers) of consideration:

      • Systems Project Engineering
      • Systems Engineering Thinking
      • Systems Thinking

Levenchuk’s style is to provide the reader / trainee / user with a “cheat sheet” – a prescriptive procedure and advice for practical use – as well as providing rationale and background on the development of the methodologies and the supporting education and training resources. But it is vitally important the right cheat sheet is applied to the right task. Systems Thinking is not a substitute for engineering project execution best-practices. What it is, is a methodology for helping shape, define and prioritise aspects of a complex project, or for architecting a programme or system of future activity. Deepening understanding and knowledge of such activities, quite distinct from simply “doing” them. Knowledge and understanding whose value materialises should that doing meet unexpected issues and future opportunities. (Significantly, “surprise” – the sensed gap between expectations and reality – is fundamental to the “Active Inference” school of Systems Thinking – more later.)

Quite recently here, I speculated on a more sinister take on the “devil in the details”, but Levenchuk provides clarity on the distinction between:

The devil in the details and
an angel in the abstractions.

We need both in different places. The architecting requires knowledge and understanding of the abstractions, and of which details are significant to that task and which can be ignored. The execution requires practical knowledge of more of the details. (In my own post above, the last line acknowledged that when it comes to details what we’re missing are relevance and appropriateness to the matter in hand. Systems Thinking addresses this.)

This is a book about the thinking in advance of the doing. Shaping or architecting a plan for the doing, but neither the plan nor the doing per se.

[I recall many examples of working with planners and project engineers who didn’t get this and forced inappropriate detail just because they could. eg “I know from experience and documented best-practices that our plan will need to include this, this and this, so I’m not going to let you ignore them now.” – sigh!]

Complexity

Complexity is an explicit topic from the outset, in the opening sentence of the introduction:

Systems thinking helps to solve complexity in a variety of projects: it makes it possible to think one at a time about everything important, temporarily discarding the unimportant, but without losing the integrity of the situation, the interplay of these separately thought-out important moments, systems thinking manages attention in complex collective projects.

The idea discussed above, of managing attention to which details are appropriate and relevant, where and when, is in that first sentence – it’s intractable to think about everything, everywhere, all of the time in a complex situation. It’s why, in my own work, I think architecturally. There’s a whole section On Thinking in complex situations generally, which prompted my attention on “definitions” when I first skimmed the book.

Having everything well defined is really only a feature of closed systems, where the scope and complexity are relatively simple and amenable to all details being known in advance (i.e. no surprises).

Real projects, real-life human endeavours on any scale, are not only complex but, because of that complexity, also effectively open systems. Systems some of whose sub-systems and components will arise from considerations outside the intended scope of the endeavour.

There is a tendency to think of definitions of the objects of interest / within scope of any endeavour – the establishing of well-defined terminology – as critical or fundamental to that endeavour. Indeed, data dictionaries and class libraries would appear to be predicated on that presumption.

Definitions

Levenchuk has sections on Terminology in his On Thinking chapter, entitled

    • “Words-as-Terms Are Important and Unimportant”, and
    • “Definition: as a Coffin for a Dead Think”

The latter is a play on (or mistranslation from?) Russian philosopher Shchedrovitsky, who said “A definition is a coffin for a dead thought”. As noted above, I’d like to think US philosopher Dan Dennett would agree. So long as there is still thinking to be done, a definition of a term referring to the concept of an object in the real world is little more than a placeholder. In systems thinking, there is always thinking to be done. So much so that Levenchuk even recommends proceeding without using the term to refer to the object, instead using language about the object and its properties and relations to its real-world activities, functions, roles and processes for as long as possible.

In the former, the paradox that terminology is both important and unimportant is first introduced. Despite best intentions, assuming that well-defined terms mean well-defined concepts and objects ignores the fact that within all but the simplest closed systems – in any real complex system – there are many sub-contexts of sub-systems and multi-discipline divisions of real-world knowledge and understanding. Levenchuk says:

The meanings of terms (and any other words, even if they are not called “terms”) are determined statistically, not precisely—and this is done by using them in different contexts. Guesses about the meaning of terms are constructed by studying extended texts describing different situations, by studying different relations of the concepts denoted by these words with other concepts denoted by other words used side by side. When determining the meaning of terms, we do not read definitions, but we examine diagrams, texts, and sets of expanded statements containing the term of interest.

Here he highlights relations, particularly at the level of thought – something that could in fact apply to an ontology of what exists in the world at a fundamental level, being defined in terms of relations – but here we are being more practical. When creating formal dictionaries – say in class libraries for systems integration – it is common to focus on relations to neighbouring types. This is partly for efficiency (it’s always easier – indeed necessary – to build on concepts that already appear to have understood working definitions), and partly because avoidance of ambiguity demands that definitions at least distinguish one item from another with which it might be confused. As a result, formal library definitions often take on the very repetitive form of “a B is an A where X applies”; however, we mustn’t overlook the paradox that, despite appearances, such formal definitions can never be as precise as we might hope to achieve in a simple closed system.
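To make that repetitive form concrete, here is a minimal sketch of my own (the entries are hypothetical, not drawn from any actual class library) of how such genus-plus-differentia definitions tend to be recorded – each new “B” defined only by its relation to a neighbouring type “A” plus a distinguishing “where X applies” clause:

```python
# Minimal sketch of the "a B is an A where X applies" pattern that formal
# class-library definitions tend to take. The example entries are hypothetical.
from dataclasses import dataclass

@dataclass
class Definition:
    term: str         # the "B" being defined
    supertype: str    # the neighbouring "A" it specialises
    differentia: str  # the "where X applies" clause that distinguishes it

library = [
    Definition("pump", "mechanical device",
               "whose function is to raise the pressure of a fluid"),
    Definition("centrifugal pump", "pump",
               "where the pressure rise is produced by a rotating impeller"),
]

for d in library:
    print(f"A {d.term} is a {d.supertype} {d.differentia}.")
```

The pattern makes the dependence explicit: every definition leans on its neighbours’ already “understood” working definitions, which is precisely why, in an open system, it can never be more precise than the terms it builds on.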

When discussing definitions more generally, beyond this systems thinking context, where identity may be based on definitions, I often cite “good fences (make good neighbours)” (after Robert Frost) or “think before opening and always close the gate in a fence in the forest” (after G. K. Chesterton). It’s an adage I learned from Magne Valen-Sendstad – the most experienced creator of library definitions I ever worked with. Essentially, bearing in mind the paradoxes that Levenchuk describes above, a good – formal, logical – definition is always worth documenting, even if it inevitably turns out to be inadequate later in the real world. Boundary disputes are easier to resolve if both neighbours know where they stand and the boundary is a fence rather than a fortress battlement. And also, if you bump up against an existing boundary – a definition – you don’t know much about, and it stretches off out of sight into the forest of real-world complexity, assume the principle of charity: that whoever put it there had good reason when they did.

On a lighter note, Oxford physicist David Deutsch tweeted agreement this very day:

Or in our context here:

“The trouble with definitions is that although they can be practically useful, the one thing which they cannot do is definitively define a thing.”

Contingent Conclusion

As ever, anything said is contingent on the future. I have so far read less than 10% of “Systems Thinking 2020“, but I can say it has very valuable content, recommended for anyone wanting to understand why Systems Thinking is important and why it is distinct from Systems Engineering or Systems Project Engineering.

As the author acknowledges, and as confirmed here, the English translation will benefit from native-speaker editing, but is nevertheless accessible.

Gratifying to this reviewer, to find so much real shared experience reflected in an obviously valuable textbook. Recommended, even on this limited review.

(I will read to completion and may extract a list of references and sources?)

=====

Post Notes:

In a messaging exchange with the author we discover more common ground not included in the current publication, but which is to be part of his forthcoming book.

The “boundaries” which emerge between distinct things in an evolving, self-organising world – using fundamental entropy<>information models – are “Markov Blankets”. My most recent reference on this is Mark Solms in a consciousness context and, although these theories have developed over several decades in the Information Science / Theory domain, Solms’ immediate source is Karl Friston. And Friston is someone with whom Levenchuk and Metelkin already have a working relationship. Small world.

A good deal of my own research – which takes information theories as more fundamental than physical science – is primarily about human knowledge & decision-making (epistemology & cybernetics) in the complex politics of science & psychology, living in the real world.

In his forthcoming work Levenchuk intends to use a more biological / evolutionary paradigm, although he still intends to follow “the pragmatic turn” – whilst I still pursue a metaphysical bent 😉

Levenchuk’s sources include:

(*) See the follow-up post reviewing the “Emperor’s New Clothes” paper.

HOLD: One key thing for my areas of interest – about the self-organising “individuals” topic and defining their boundaries in words – is that such things can be defined by “categorical” (good / bad / subjective) classifications, whereas most people expect “objective / logical” clarity in definitions – hence important AND not-important.

=====