Subir Sarkar

Subir Sarkar was interviewed by Sabine Hossenfelder last month, but I didn’t capture the link then:

Interesting content in the “Einstein was right when he said he was wrong” domain when it comes to the cosmological constant. Pointing to some radically “new” ideas being needed to fix anomalies in physics. (That’s new as in old, but ignored.) But as I tweeted at the time, it is a fine interview anyway – proper respect between scientists with different disciplines of expertise and levels of experience.

Was prompted also to read this “Heart of Darkness” by Subir Sarkar on the same topic in a magazine called Inference. More spooky convergences, as “Active Inference” is this month’s topic in Cybernetics & Systems Thinking generally.

Michael Zargham on Cybernetic Infrastructure

A quickie to capture this link:

Very impressed watching this recorded Web3 Foundation talk by Michael Zargham. He’s a name I came across from making contact with the “Active Inference Lab”. I already know Anatoly Levenchuk and Karl Friston on the AIL Advisory Board and discovered that Zargham is another board member.
(I’m intending to participate in the .edu domain of the AIL.)

The Age of Networks
and the
Rebirth of Cybernetics

Highlights:

      • Very positive non-apology for focussing on many layers of abstraction above the bits & bytes. The essence of systems thinking is knowing what details to ignore in various levels of complex systems.
      • Very familiar recap of the history of Cybernetics starting from Plato (Kybernetes) via the Macy conferences. With “systems thinking” and network architectures front and centre of response to complexity.
      • Being comfortable with circular reasoning (Hofstadter for me). “Second Order” Cybernetics, positive as well as negative feedback loops. Future consequences are causal now. (There is active predictive inference involved – hence AI-Lab.)
      • Attention cost of participation (eg in social government). The more the “infrastructure” can handle invisible processes we don’t have to worry about, the better for us. Transparency is a distraction from what really matters. Noise means we always fall back to lowest common denominators. [See Mental Switching Costs]. What we need to trust is that the design of the decentralised system knows its own limitations.

(And, great to hear someone use that quote “All models are wrong, they simply have a valuable domain of intended use.” 3 decades (!) since I heard Julian Fowler use it.)

Anyway, a new “hero” (with no mention of John Doyle).
Connected on Twitter.

=====

Contrast with this pm’s talk, Iain McGilchrist’s elaboration of his “Sense of the Sacred”. Tremendous audience (and Iain) prejudice against “engineering” and machine language. These two domains just don’t get how close they really are. Same as deep-thinking physicists being very close to the same sense of (something) sacred. In the Solms / Friston (bio-psycho) story, the turnaround of Damasio is telling, from the same prejudice against mechanistic algorithms to understanding the human subject involvement.

=====

 

Mental Switching Costs

The sense of the brain being actively engaged with too many thoughts to properly address any new issue, never mind any of the existing ones, is a common feeling – for me anyway.

Once you have several mental balls in the air, each connected to some strategy to get something delivered productively, it’s impossible to pick up a new one without dropping at least one of them. [The other metaphor is the plate-spinning circus act.]

In correspondence today Richard Emerson coined the expression:

Mental Switching Costs

As well as reflecting the existing thought process above, that formulation instantly suggested its relationship to the (Friston) Free Energy Principle and all those systems-thinking consequences of Markov blankets and active inference for living and sentient organisms. It’s all about efficient and effective use of resources and, when one of those costly resources is conscious attention itself, about maximising which tasks can be left to the sub-conscious.
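(For reference – my gloss, not Richard’s formulation or Friston’s exact notation – the variational free energy that the FEP says any self-organising system minimises is usually written as:

$$ F = \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o,s)\big] = D_{\mathrm{KL}}\big[q(s)\,\|\,p(s \mid o)\big] - \ln p(o) $$

where $o$ is what is sensed and $q(s)$ is the organism’s working model of the hidden states behind it. Every task switch means re-fitting $q(s)$ to a new context – which is one way of reading the “cost” in Mental Switching Costs.)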

Isn’t it great when a plan comes together?

Come the Revolution

Regular commenter AJOwens (“Staggering Implications”) posted a very astute thought below my post on John C Doyle and Zombie Science.

Whether we see problems with “current” science as a bug or a virus, or simply the current state of ever-contingent, imperfect science, the switch to a new dominant view within science is of course exactly what Kuhn was talking about in his revolutions of scientific paradigms. And they’re always revolutions because – for whatever specific reasons – the existing paradigm naturally resists change. (I’d still say the current shift is special, somewhat meta, in that it’s about science not about any particular content of science. But he makes a good point.)

As an engineer / technologist I had always focussed on the techno-economic industrial paradigms (TEPs after Freeman & Perez, previously Kondratiev Waves) enabled by advancing science, not the revolutions of or within science itself. Doubly meta here, because the current paradigm we’re struggling to come to terms with is the Electronic Information & Communications “wave” in human culture and economies more widely. This is quite distinct from the science and technology market-place that has enabled it, and quite distinct again from the revolutionary idea that information and communications may in fact be the very foundations of any kind of science.

Understood in [Kuhnian] terms, the “bug” in science is a very old one, and its roots are epistemological. All scientific research is conducted within a paradigm, but the paradigm influences what counts as “evidence.” Phenomena contrary to the reigning theory are at first not even noticed or recognized as important “facts.” If they become more persistent obstacles to current theory, they are explained away, dismissed as anomalies, or otherwise resisted. Eventually the reigning theory becomes so riddled with inconsistencies and beset with contrary observations that its very paradigm is overturned, and a new one is adopted which can accommodate the new evidence.

I believe we are in the middle of such a paradigm shift, and the work of people like McGilchrist and Solms and Doyle are part of it.

AJOwens, comment April 11th, 2022.

(And he goes on to suggest some other current sources.)

The point – we are in the middle of a Kuhnian paradigm shift – and being revolutionary, the process will have its downsides as well as its progress.

And this particular paradigm revolution is complex, ubiquitous and many-layered on multiple meta-axes. It is – or will be when it reaches a tipping point – going to be painful on a profound and grand scale. This is not just horse-drawn canal boats being replaced by steam railways. The e-Comms enabling technology is running full-steam ahead of its consequences for all aspects of humanity.

“The paradigm influences what counts as evidence.”

Indeed, as I’ve said before.
And resistance is futile.

Robert Pirsig On Quality

Published this week, On Quality is a collection of writings by Robert Pirsig, prefaced and selected by his widow Wendy Pirsig, almost exactly five years after his death.

The Robert Pirsig Story

Apart from introducing us to Bob’s interest in the ubiquitous presence of Quality and to his two main writings, the books Zen and the Art of Motorcycle Maintenance (ZMM) and Lila, the preface also gives us “The Robert Pirsig Story”. Ironically, Wendy points out that the most relevant parts of Bob’s early biography are to be found in the pages of ZMM, despite the “for rhetorical purposes” warning which led some readers, myself included, to research which aspects did indeed correspond with reality. I say ironically because for quite some time there was speculation, even a direct suggestion from Bob, that Wendy would one day write his biography. Here she gives us an eight-page summary – including, despite its brevity, several newly public details – and lets us know that the selections in On Quality are themselves “loosely chronological”.

Previously published selections come not just from Lila and ZMM and Bob’s paper Subjects, Objects, Data and Values, but also from DiSanto & Steele’s Guidebook to ZMM and Dan Glover’s Lila’s Child. New selections come from Bob’s letters (to unattributed correspondents) and from his notes of the very few talks (*) he gave on quality.

[(*) Post note: In fact the whole of the introductory chapter “The Right Way” is a selection from the transcription of a talk he gave just a month after first publication of ZMM – now available in full here.]

On Quality

And the focus really is on quality. Whilst it naturally acknowledges that his Metaphysics of Quality is elaborated within Lila, the multi-level “patterns” that form the full ontology – the model of evolved existence in the world – are not mentioned. Dynamic Quality, originally simply “quality” in ZMM, is the fundamental – radical empirical – essence of what is experienced.

‘Quality is just experience. It is the essence of experience of what is sensed. That’s all.’

‘It is not an intellectual category or any kind of thing that is independent of experience itself.’

RMP, Letter October 2, 1993

On Quality focuses on the quality monism itself and on its first division into static and dynamic, contrasted with the more orthodox subject-object split:

‘That line, “Without Dynamic Quality the organism cannot grow. Without static quality it cannot last. Both are needed,” is emerging in retrospect as the most important one in Lila.’

RMP, Letter September 4, 1993.

That statement itself pre-figures what today would be seen as fundamental to “homeostatic” models of life and consciousness in both science and philosophy, where all empirical knowledge is at root “affect”, a categorically good or bad felt property, before any more specific kinds of thing can develop in biology or in intellect. On Quality includes several references to Bob’s archetypal “hot stove” example of categorically good vs bad immediate experience. Elsewhere he went further and also used the classic “thermostat” example of what would be instantly recognisable as homeostasis today.

Also included in On Quality are selections from Buddhist texts where Bob saw parallels with his original quality thinking, and took them as confirmation that quality must indeed be fundamental and ancient, independent of Western scientific progress.

Still Important Today

Existing Pirsig readers – and there are millions – will welcome this sympathetic selection of “the most important” basic thoughts on quality from their source. For those readers, the notes from the few talks he gave form the bulk of the newly published material [(*) above and (**) below]. For a new reader who may have resisted the urge to dive into two best-selling, cult “rhetorical” road-novels from 1974 and 1991, On Quality provides a gentle introduction to their core thoughts, and may tempt you to follow up on what all the fuss was about and why they remain important today.

=====

[For more on Pirsig from me on Psybertron, start from this summary page and follow links to my Pirsig Page (short-term) and the Robert Pirsig Association page (longer-term).]

[Post Note (**): one such large extract was shared on Literary Hub, by the publisher Harper Collins / Mariner Books.]

=====

The Elon Musk Effect?

Amazingly, after so many convergent threads on systems architectures and their fragility or resilience to well-placed viruses or bugs – in my agenda that “system” being the whole of science-led rational orthodoxy – I have many times argued, from a systems thinking approach, that complex systems (like politics plus media plus social-media) need moderation of the speed of communications in key layers. (As opposed to yes/no censorship of content.) We need this kind of thinking in order not to degenerate to lowest common viral denominators through social-media. (Sadly the prevailing “virus” is that any kind of moderation is seen as a constraint on the much-fetishised idol of “freedom of speech”.)

The threat / promise that Elon Musk likes Twitter so much he might buy it, in order to support that fetish, has got a lot of people thinking, and sure enough systems architecture thinking has gained a little traction:


Naturally, I enthusiastically agreed with both.

=====

Freedom runs on rails.
There are rules of engagement.

More on Fundamental Information & Computation

An information-and-computation-based metaphysics is fundamental to several recent sources (as well as my own metaphysics) – even if the specialist scientific researchers often don’t concern themselves with metaphysical aspects of ontology. Most recently John C Doyle and Mark Solms for example – information and its processes are simply taken as more fundamental than any other aspect of physics. The self-organisation and emergence of multi-layer systems and their architectures, Markov-blankets, active-inference, systems-thinking and the like then explain how more recognisable physical (and living and conscious) things arise and behave.

Sadly there’s a “bug” in orthodox science (Doyle) that rejects these descriptions where objective chains of causality appear to be broken by abstract patterns of information and which naturally bring the subjective aspects of consciousness into consideration. Empathising with the subjective is a “Rubicon” orthodox science needs to cross (Solms).

So.

“Minimal physicalism as a scale-free substrate for cognition and consciousness.”

– Chris Fields, James F. Glazebrook and Michael Levin

is a paper from an August 2021 special edition of the journal “Neuroscience of Consciousness” – citing several authors already relevant here – and referenced earlier by Anatoly Levenchuk as relevant to systems thinking and an information-and-computation-based metaphysics. To give a flavour (MP = Minimal Physicalism), it concludes:

In direct contrast with strict Cartesianism, MP holds that we can better understand our own awareness by understanding the awareness of our more basal cousins. Our homeostatic/allostatic drives and the mechanisms that satisfy them are phylogenetically continuous with those of prokaryotic unicells …

… The tradeoffs that we implement, and adjust in real time, between perception, memory, and planning are tradeoffs that have been explored and adjusted in niche-specific ways by all organisms throughout evolutionary history. We can take advantage of these fundamental mechanistic similarities to design theoretical and experimental paradigms that reveal and assess scale-free properties of consciousness in both natural and engineered systems.

Note:

“better understand our own awareness by understanding the awareness of” more primitive organisms … “both natural and engineered systems”.

No real review, just some extracted highlights which I can link to previous work here:

MP is Minimal Physicalism – That is, there are no physical assumptions beyond quantum information theory. (Not sure why “quantum information” specifically – but certainly information theory/ies more fundamental than anything else in physics.)

All physical interaction IS information exchange. (Agreed)

There is no Hard Problem. That is HP is not a problem to be solved, rather a set of inhibitions to be overcome. (Absolutely! It’s only orthodox science’s denial of subjectivity that gets in the way of explanatory understanding – see Solms above.)

There is no Combination Problem of psychist / subjective elements. (Ditto. Never was!)

“Bow-tie” systems topology. (Interesting. Something I’ve used in real-world systems engineering before, and which Doyle’s work often uses. Maximum diversity in higher and lower layers; minimally diverse, exploitable bottlenecks in the middle layers. Everything comes in threes, even individual layers.)

Markov Blankets (MB) – both Pearl and Friston forms covered. Also Tononi (but no Dennett or Doyle). Also Boltzmann (1995!*), Hacker, Heylighen, Damasio and Csikszentmihalyi (flow!). (* Boltzmann: 1844–1906.)

Not just Homeostasis (steady state) but Allostasis predictive of future demands.

Many testable “predictions” (the point of this paper?) … including
– use of “Quantum Zeno Effect” (Henry Stapp is also a joint reference)
– interoception and “the self“. (More Solms).

Prediction 15: The “self” comprises three core monitoring functions, for free-energy availability, physiological status, and organismal integrity, and three core response functions, free-energy acquisition, physiological damage control, and defense against parasites and other invaders. These will be found in every organism. Indeed they are found even in E. coli, which has inducible metabolite acquisition and digestion systems (Jacob and Monod 1961), the generalized “heat shock” stress response system (Burdon 1986), and restriction enzymes that detect and destroy foreign, e.g. viral DNA (Horiuchi and Zinder 1972). All of these responses act to restore an overall homeostatic setpoint, i.e. an expected nonequilibrium state; hence they can all be viewed as acting to minimize environmental variational free energy or Bayesian expectation violation. (Friston 2010; 2013).

Templeton-funded – Christian religious funding being an interesting feature common to much research that questions the fundamentals of science itself.

Zience and John C Doyle

Further to the previous post, let’s try and elaborate some specifics of what John C Doyle has to say. What is clear, after the throwaway “scientists will hate this” remarks, is that this is why he remains unpublished in more popular journals and publishing formats. Because he is pointing out “a problem with science”, he meets resistance to getting published.

[It’s interesting that in Gazzaniga, where I first came across Doyle, the most interesting read was the more autobiographical “Tales from Both Sides”. I originally understood the title as a reference to the two sides of our bicameral-mind / divided-brain, but it was in fact a reference to the politics between researchers with unpopular findings – the story of whose work got published with which content, and of Sperry, who eventually won the Nobel Prize. I’m not, and never have been, a conspiracy theorist. The institutional defence mechanism is a bug in scientific thinking, not some nefarious active conspiracy of secret interests. Essentially the bug is ignorance of the multi-layered architecture of “systems thinking”, which is artificially flattened into one-dimensional logical objective “rationale”.]

It is a seriously degenerate problem, which is winning, because it’s self-reinforcing and we humans are poorly evolved to resist it. The bug is like a virus exploiting rational human weakness. Multiple timescales are part of the problem too – from speed of light global comms, to the pace of biological evolution, and the enormous range of calendar-based individual and collective human activities in between.

This is a disaster bigger than Anthropogenic Climate Change, not least because Zombie science, Zombie law and Zombie politics compromise our chances of successfully addressing it.

In this October 2021 presentation, Doyle gives us his take:

Ironically, Dan Dennett was one of those who used real parasite-driven behaviour to illustrate issues the “Four Horsemen” had with religion in the early 21st C religion vs science wars. His classic example was a parasite infecting the primitive nervous system of an ant, not only effectively killing the insect but changing its behaviour to ensure it is eaten by a larger grazing host when it does die – a massive resource to multiply the parasite numbers and spread them through the host population. A neat viral trick. (Dennett’s version was the lancet fluke; see Cordyceps for the fungal equivalent if that’s not already familiar.) Doyle uses exactly this example, and more classic variations – like the parasite infecting mice which reduces their fear of predators like cats. Same propagation trick.

Zombies were a popular meme in philosophy – a thought experiment – about whether organisms’ (like humans’) behaviour reflected internal knowledge of what they were doing and why. How would we know if they had any internal sense of self? These virally compromised insects and mice also became known as Zombies for their so obviously self-disinterested behaviours.

Doyle’s contribution is Zombie Science or “Zience”
(Rough paraphrase of 15 mins from ~28:30 to ~43:00 in this presentation.)

Vaccines – in the biological and social sense – are an example of a “Diversity-Enabled Sweet Spot” in the enormous stack of human systems. As we have seen with Covid, the medical science is only one small part of the stack, from policy setting and enforcement, through medical processes and procedures and virus mutations, to the levels of individual and social psychology and behaviours. Many-layered and massively complex, massively distributed asynchronously around the globe.

But that’s just the warm-up. Here’s the big thing.

Things are going wrong. And things are going to get worse. And almost everything we are doing with IT/Comms networks – like “Digital Transformation” – is actively making our problems worse.

We really need to understand fundamentally what is happening, not just anecdotally via individual examples.

Viruses exploit the universality of operating system architectures. And viruses rule – they kill HALF of their hosts every day (most of those hosts are bacteria and other single-celled creatures).

As well as Viruses, we also have more active predators in our systems – Malware. Social Media is itself the most important Malware.

The awful thing about our most recent viral experience across all these levels is that it reinforces existing inequalities (race, wealth etc.)

Language itself is hijackable – it’s an important part of our operating system – we have many issues around the globe where exactly that is happening. Zombie memes. Contagious misinformation – false, unhealthy and dangerous. Previously ‘solved’ human rights and freedoms problems are coming back as well as new ones.

And science is not immune. Zombie Science.
Its own self-correcting processes are not protection against the problem. Science will in fact reject all these multi-layer / diversity arguments. We are losing this battle. Good science is NOT winning the war against “Zience”.

Legal systems too. Laws and enforcement.
Zombie Law too. Unintended consequences. Zombie corporations, Zombie capitalism … endless.

It’s the architecture, NOT the individual viruses or humans.

I feel I’m fighting in the same trenches as John Doyle.

(Also note the significance of “Diversity” in “System Level Synthesis”.
“Vive la Différence” as I so often say. Our systems will always have layers to be practically functional – fast and accurate enough – a single-layer system can never work. But such a system will always have a “diversity-enabled sweet spot” and many layers will be virtualised relative to the explicit layer in which “we” operate. These are vulnerable to viral attack, and we need to ensure we don’t lose sight of what matters in each layer so we can protect & manage them, not allow them to become Zombies.)

We need systems thinking – about the right things in the right layers in the architecture – not about all the “objects” (individuals) in the system and their direct logical / causal relations in the explicit layer. We need to consider and protect against viral fragility in the virtualised layers.

As in the preamble note above, the “bug” in science – and the reason Zombie Science is not helping us solve this problem – is that it rejects independent causality in multiple layers and flattens everything into one layer of explicit objects.

Good science
is NOT winning the war
against “Zience”.

Following the science (*) can be dangerous.

=====

Post Notes:

(*) What follows is an English translation of a post by Anatoly Levenchuk, which kindly refers – just a couple of days later – to my three recent posts here as his introduction to the importance of Doyle, even if Anatoly remains sceptical about the literal virus / bug in real science – how valid is the analogy, etc? Sure, I’m finding this problem more generally when pointing out “limitations to science”. My focus is more metaphysical, and practical science sure has lots of work-arounds that maintain sanity in the face of dubious or potentially misleading anomalies. (Interestingly Anatoly also invokes Deutsch & Marletto in support. Also for me, several warnings on why “Cybernetics” failed as the umbrella term for “Systems Thinking”, synonymous for my purposes.)

John Doyle: System Level Synthesis and Rabid Zombies

by Anatoly Levenchuk 12 April 2022.
What can be found in systems thinking
In my current understanding, systems thinking is absent from the intelligence stack as a separate discipline. In fact, it is a certain set of ideas that are themselves understood in different disciplines and developed in different schools of thought:
— architectural consideration as a result of functional analysis and modular synthesis, while we also add allocation/location and splitting the total cost of ownership, plus WBS for the work of the creation system. These are all different ways of splitting into part and whole. Features of modular synthesis: interface stability, general architectural principles for engineering, such as modularity and the cost of communications as a source of its occurrence — this is already given in the textbook of systems thinking. But also the theory of system level synthesis: the need for feedback within controllers, multi-scale/diversity for high-speed systems on slow equipment — this is John Doyle.
— minimal physicalism within the framework of panpsychism, considered along the lines of development of IIT and quantum-information scale-free physics (Fields and Glazebrook, and each word here needs a long deciphering). Along this line there is a consideration of Markov/Friston blankets (Pearl and Friston), ergodicity and renormalization, “natural boundaries of systems as attractors in the state space that can withstand changes” (Fields, and since everything is nested, the “systemicity” of parts-wholes as multi-level “systems in the environment” is quite applicable). Constructor theory (Deutsch, Marletto) here.
— the principle of free energy, as defining multi-level optimization/learning of the entire universe, including multi-scale time (evolution, creation and adaptation/learning/tuning of the organism-phenotype, living organism). Frustrations between levels as setting multi-level optimization (Vanchurin, Katsnelson), the need for different scales of both systems and time. The problem of complex life cycles and parasites/hackers (Kunin, Levin, Doyle), including the problem of creationism where it is not needed (evolution) and the absence of creationism where it is needed (trial-and-error engineering instead of designing based on at least SLS) — Doyle. And also memetics (including parasitic memes like zombies and rabies, changing the behavior of the host) with an exit to high (social) evolutionary levels, here Deutsch and Doyle.
— Pragmatism along the lines of physicalist semantics through measurements and observers here. The problem of the “biological individual” (which of the system levels to consider the “main individual”, where is the agency of the level below the organism and the level above the organism). The principle of minimizing free energy here. The fact that “just science as an explanation of the world” is not needed, but science is needed for engineering (to change the world for the better in order to minimize that very free energy, that is, to reduce the unpleasant surprise). We always create (or modernize, brownfield) successful systems that change the world for the better, we reduce unpleasant surprises now and in the future. The goals of life, endless development here.
— transdisciplinary intelligence stacks (how to think about all levels at once and at each level universally) and engineering (what are the features of practices at the evolutionary levels of matter–creature–personality–organization–community–society–humanity). Here I am making my own attempt for now. And I am writing these notes as part of my work on exactly this point, because I need the SoTA of all of this. I am writing straight from my head, and make no claim to completeness here.

All of these ideas are discussed in a variety of transdisciplines (physics, mathematics, ontology, methodology) and also in the engineering of a variety of systems (systems engineering for cyberphysics, medicine and farming for creatures, education for people, enterprise engineering for organizations, cultural construction/community building for communities, and so on — this “so on” is not so obvious from the point of view of “engineering”).

And I have written quite a lot about all of this, and given many links. But a new name and a new line of thought appear in this text, one which has contributed to the development of systems thinking. This is John Doyle and the theory of system level synthesis (SLS) that he and his students are developing. Usually the synthesis of a regulator/controller/control system follows the old cybernetic scheme, where the controller is separate and the system (plant) controlled by it is separate. But SLS suggests shifting the task from controller synthesis to the synthesis of the entire system at once, together with numerous external and even internal feedbacks in the controller (they are needed there in abundance).

John Doyle and his proposals for a general theory of control in living and nonliving matter
I will not go into the nuances of terminology here: how TAU differs from TAR (if you do not know these words, then you do not need to know the difference), how a regulator differs from a controller, and a controller from a control system, and how all this is written in English. This is unimportant for our purposes; these are all different sides of the same set of problems: how to achieve our goals so that along the way we do not get blown away/torn down/eaten/broken somewhere, and even the goal does not really concern us (since it is always possible to achieve a minimum of free energy if there is no local goal).

And I assume that you know about cybernetics as a general science of control in the living and nonliving, and you know why it died: because “it could not”. John Doyle essentially repeats all the theses of the problem statement for cybernetics, but carefully does not pronounce this word itself — and offers the solutions that cybernetics could not formulate. Because cybernetics seemed to be almost a synonym for the systems approach, but there was no real systemicity (multi-levelness “out of the box”, and an understanding of why and how that multi-levelness arises) in cybernetics. That’s why it “couldn’t”.

By the way, I never understood why they separate the control system from the system: it always seemed to me that the control function is somehow always spread out across the entire system, distributed and multi-level. So in SLS (system level synthesis) theory this is exactly what it is: the system blocks with their speed of calculation or activation or sensing, the communication delays, and a multitude of feedbacks in all their complexity between all these heterogeneous blocks are modeled at once. The trick is that feedbacks exist inside the “control system”, which is itself very heterogeneous in the characteristics of its elements — and this heterogeneity is required, as is modeling/synthesis of the entire system as a whole, and not just a separate controller and a separate plant.
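[For reference, a minimal sketch of the core SLS idea – the simplest state-feedback case as set out in the review paper linked below, not Doyle’s full treatment. For a discrete-time plant

$$ x_{t+1} = A x_t + B u_t + w_t, $$

instead of searching directly for a controller $u = Kx$, SLS searches over the closed-loop responses $\{\Phi_x, \Phi_u\}$ mapping the disturbance $w$ to the state $x$ and the input $u$:

$$ \begin{bmatrix} x \\ u \end{bmatrix} = \begin{bmatrix} \Phi_x \\ \Phi_u \end{bmatrix} w, \qquad \begin{bmatrix} zI - A & -B \end{bmatrix} \begin{bmatrix} \Phi_x \\ \Phi_u \end{bmatrix} = I, $$

with $\Phi_x, \Phi_u$ stable and strictly proper. Any such pair is achievable by the internally stabilising controller $K = \Phi_u \Phi_x^{-1}$, which is why constraints on locality, delay and structure can be imposed directly on the closed-loop maps rather than on the controller itself.]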

I already wrote about the convergence of systems engineering, software engineering and control systems engineering in 2009: https://ailev.livejournal.com/675208.html . But everything was standard there, although the main problem was already identified: “in modeling electrical networks for the purpose of studying blackouts, one cannot proceed from delays determined purely by the “physical network”: a significant number of delays are associated not with physics, but with the processing time of software and people. On the other hand, one cannot model only software and people, because then the specificity of the electrical network will be lost. Therefore, one can only model together, but this is precisely what causes the substantive problem of combining different models.”

By and large, control systems engineering is not classical systems engineering (it was precisely this example that Prof. Derek Hitchins cited in the distinction between systems engineering and engineering of [some kind of] systems — https://systems.hitchins.net/profs-blog/systems-engineering-vs.html). The task of control systems engineering was to create controllers; it inherited cybernetics, the science of control, where the central dogma was homeostasis with a system diagram of a controller, a device/plant and a feedback loop. Then everything was great, and algorithms for such controllers were discussed for many years: PID was among the early ones (in 2011 I even wondered how to teach the PID Controller to small children, https://ailev.livejournal.com/971904.html ; a minimal PID sketch follows the list below). Science developed, control algorithms developed. And everything was good in simple systems based on electronics, and bad in:
— biological systems, because the control schemes there were wild (similar to a plate of spaghetti: all these feedbacks were monstrously tangled, and there were an awful lot of them even in the simplest cases). Cybernetics could not cope with them, and therefore died due to lack of results.
— different networks, the largest distributed cyber-physical systems. There, everything immediately became confusing for some reason and also turned into a nightmare because of these endless delays and uncertainties. Although for the Internet and packet transmission based on the TCP protocol, simple hacks were found that made life acceptable: Internet network management was somehow possible if there was nothing important hanging on it in real time.
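[For readers who have not met PID: a minimal textbook sketch – purely illustrative, not from Levenchuk or Doyle, with made-up gains and a toy plant – of the classic single-loop controller that SLS generalises beyond.]

```python
# Classic textbook PID: u = Kp*e + Ki*integral(e) + Kd*de/dt
# Toy illustration only - the gains and the plant are invented for the example.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


# Toy first-order plant dx/dt = -x + u, regulated towards a setpoint of 1.0
pid = PID(kp=2.0, ki=1.0, kd=0.1, dt=0.01)
x = 0.0
for _ in range(1000):
    u = pid.update(setpoint=1.0, measurement=x)
    x += (-x + u) * 0.01          # Euler step of the plant
print(round(x, 3))                # settles close to 1.0
```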

John Doyle (an outstanding personality in his own right, look at the bio on his page: http://www.cds.caltech.edu/~doyle/wiki/index.php?title=Main_Page and here is a collection of his relatively fresh materials “in bulk”, https://www.dropbox.com/sh/7bgwzqsl7ycxhie/AABQB9L2J-XmCniwgyO3N83Ba?dl=0 ) began researching the issue of “universal laws of control in living and nonliving things” with his students (yeah, the question is posed exactly like Wiener’s with his cybernetics) and came to the following conclusions:
— the key architectural issue in controllers is the speed of their operation, without loss of accuracy. In biology, if you are too slow in controlling the body you are eaten (literally), and if you miss you are eaten too.
— the equipment is always either precise-flexible and slow, or crude-inflexible and high-speed. On slow “hardware” (or meat, or even molecules in biochemistry) you either don’t make it in time (and eventually die), or on fast “hardware” you miss (and eventually die). The key point in the equipment is slow and leaky communications, which are also expensive. At the same time, there are quite a lot of such trade-off pairs besides speed–precision (flexibility–rigidity, customizability–automation, etc.).
— the solution to the “fast controller on slow elements” problem is 1. a multi-level controller architecture across a variety of scales, in which there is 2. a lot of feedback inside the controller, and 3. fast communication replaced by memory, which allows some prediction, since past states can be extrapolated, plus 4. an interface module for interchangeability of modules, and 5. an immune system, since that interface module is exactly what gets hacked. (A toy sketch of this fast/slow layering follows.)
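[Again a toy, not Doyle’s SLS machinery: a sketch of the “slow-but-accurate layer plus fast-but-coarse layer, glued together by memory” idea, with all the numbers invented for illustration.]

```python
# Toy layered controller: a SLOW but ACCURATE planning layer that only runs
# occasionally, a FAST but COARSE reflex layer that runs every step, and a
# remembered plan standing in for the fast communication we don't have.

import random

def plan(target, state):
    """Slow, accurate layer: computes the exact correction still needed."""
    return target - state

def reflex(correction):
    """Fast, coarse layer: can only apply a crude, quantised push."""
    step = 0.2
    return max(-step, min(step, correction))

target, state = 5.0, 0.0
remembered = 0.0                      # memory bridging the slow layer's updates

for t in range(200):
    if t % 20 == 0:                   # the slow layer replans only every 20 steps
        remembered = plan(target, state)
    push = reflex(remembered)         # the fast layer acts on every step
    state += push
    remembered -= push                # work through the remembered plan
    state += random.uniform(-0.02, 0.02)   # disturbance the fast layer rides out

print(round(state, 2))                # ends up close to the target of 5.0
```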

Where to read about Doyle’s work on system-level synthesis
Here’s where to read in more detail:
— a review of system level synthesis, 2019, https://arxiv.org/abs/1904.01634 (this is how it developed in the depths of techies, without much access to biology). This article surveys the System Level Synthesis framework, which presents a novel perspective on constrained robust and optimal controller synthesis for linear systems. We show how SLS shifts the controller synthesis task from the design of a controller to the design of the entire closed loop system, and highlight the benefits of this approach in terms of scalability and transparency. We emphasize two particular applications of SLS, namely large-scale distributed optimal control and robust control. In the case of distributed control, we show how SLS allows for localized controllers to be computed, extending robust and optimal control methods to large-scale systems under practical and realistic assumptions. In the case of robust control, we show how SLS allows for novel design methodologies that, for the first time, quantify the degradation in performance of a robust controller due to model uncertainty — such transparency is key in allowing robust control methods to interact, in a principled way, with modern techniques from machine learning and statistical inference. Throughout, we emphasize practical and efficient computational solutions, and demonstrate our methods on easy to understand case studies.
— a multi-layered architecture with multiple feedback loops within living systems, “Internal Feedback in Biological Control: Architectures and Examples”, October 2021, https://arxiv.org/abs/2110.05029
— Fitts’ Law for the speed–accuracy trade-off describes a diversity-enabled sweet spot in sensorimotor control, 2019, https://arxiv.org/abs/1906.00905 (the law itself is recalled just after this list)
— an experiment with a “bike controller” (demonstrating that the theory predicts the experimental data exceptionally well) and a description of biological multi-layeredness, “Diversity-enabled sweet spots in layered architectures and speed–accuracy trade-offs in sensorimotor control”, https://www.pnas.org/doi/10.1073/pnas.1916367118
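[For reference – the standard form of Fitts’ law itself, not Doyle’s re-derivation:

$$ MT = a + b \log_2\!\left(\frac{2D}{W}\right) $$

movement time $MT$ grows with distance $D$ to the target and shrinks with target width $W$, with empirically fitted constants $a$ and $b$; the log term is the “index of difficulty”, i.e. the speed–accuracy trade-off the paper re-derives from layered sensorimotor architecture.]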

But it is interesting, of course, to listen to him at his frantic pace (I slowed the video down so that it would not flicker so much and I could at least somehow make sense of the theses delivered at tongue-twister speed) and to look at the colorful pictures in the presentations (they are all about the same thing, but still differ in their emphases):
— an older half-hour one (March 2018) with the main idea and pop experiments right on stage, https://www.youtube.com/watch?v=GD7x1az6U6g and there are 253 slides for this half-hour, http://www.lccc.lth.se/media/LCCC2018/WS2018-10/Slides/Doyle4Lund.pdf
— a newer half-hour (December 2019), more theory, https://www.youtube.com/watch?v=qKibTKK_yY8
— a fresh hour-long one (October 2021), revealing the theme of zombies, rabies and bad science, https://www.youtube.com/watch?v=Bf4hPlwU4ys

The main thing here is that to achieve robust, high-speed and precise control, you need to combine in one fairly complex architecture both slow-and-precise and fast-and-imprecise elements, and also have some memory — this diversity gives the optimum, the diversity-enabled sweet spot (DeSS).

Zombies, rabies and bad science (zience)
There are hosts, there are parasites. Parasites that hack/capture complex behavior are called “zombies”. Those that change behavior in an aggressive direction are called “rabies”. The parasite attack occurs by replacing something good with something bad on a clear interface. And here Doyle standardly follows the line of all “minimal physicalists – panpsychists”: since we are talking about systemicity as such, it will also work on a social scale. There, ethology rules (and surprising conclusions are made that the best results are obtained under matriarchy), and people also have memetics (and here are these very “zombies” – memes that change behavior). In any case, the conclusion is made that aggression is bad, it shortens life. And “social immunology” and “social parasitology” are needed.

Doyle says that at the memetic interface, it is necessary to distinguish working memes, zombie memes as error-causing memes, and rage memes for the social level. Science for him today is zience, because it is a “zombie science”, deeply rotten in its attitudes. Because this science does not lead to the synthesis of complex architectures with multi-level solutions based on diversified elements with multiple feedbacks in the control loop. Doyle’s thesis is that the absence of creative design where it should be (in engineering, including social engineering) is much worse than the presence of creationism instead of evolution. He is pissed off (pun intended) that instead of normal design based on the theory of system-level synthesis, due to ignoring the knowledge of how this is done, regulators:
— “emerge” in the course of “self-organization” or chaotic local design of individual subsystems; they appear in fact ad hoc, that is, of obviously poor quality, especially in the first versions
— theories of such regulators appear in individual fields of knowledge, but they are transdisciplinary (universal)
— no attempts are made to create something like biological immune systems, to recognize evil as evil: he considers the theses of “flourishing diversity” and “evolution” to be incorrect, science for him is precisely about avoiding evolution (the “trial and error” method) in engineering, he is for rational design: creationism where it is needed (in technology and society).

Standard criticism here: “we know these techies-physicists, they always climb into places where they do not understand anything. Fomenko, Sakharov, now Deutsch has climbed into politics. And this Doyle is there too! We ignore him.” Doyle says outright that he is telling seditious things, and his formulas for regulators/controllers for biological systems of the “bicycle control” type (where there is no sociology or politics, the system level is below the level of a rational being) are published in biological journals with great difficulty, only thanks to good personal connections! What can we say about his thoughts on science, which in his opinion is zombified, and in terms of social systems, the design there does not stand up to any criticism, and there are just plenty of parasites and evolutionary errors on all interfaces, leading to vegetation instead of prosperity. No, Doyle says nothing about market regulation, but he is outraged by the ugly reaction of humanity to covid-19, he believes that they could have done a better job if they had acted on the basis of theory. And there is a theory, that same SLS! They just don’t want to listen to it, it is systemic, and not a single-level reductionist solution!

Doyle also believes that no neural networks and AI/ML in quantity will help humanity without SLS: at the level of small engineering solutions of the near future, they are quite good, but when it comes to large distributed cyber-physical systems, everything will die: and no reset button will help, simply nothing will happen during reboots, everything will stop and will not restart — due to the lack of a well-designed multi-level system with various modules of different scales that will sit on well-defined and immune-protected interfaces and (most importantly!) jointly provide stable/robust (when conditions change sharply, regulation both succeeds and does not miss) control due to complexly organized multiple feedbacks and good extensive memory inside the control system. And now everything is ad hoc, sometimes it happens by chance, “technoevolution” by trial and error sometimes poke successfully.

What to do with all this
Here are my plans for John Doyle’s work with his many students:

1. Use in the (systems) engineering course. Regardless of the conclusions about biological systems (there are definitely strong results there) and about social systems, the work on SLS is extremely important for the architectural development of cyber-physical systems. Remember that systems engineering and control systems theory are converging (plus software engineering, where I would also include engineering on universal/trainable algorithms, that is, modern AI on neural networks). So I would immediately bring this cyber-physical part to the systems engineering course, this is clearly no worse an idea than the ideas of DSM (design structure matrix), while the connections are absolutely clear and the mathematics behind them is clear.

In these works on SLS, mathematics is used as the basis of a meta-meta-model/upper ontology, so the ontological elaboration will not be too difficult (I just had a text on this topic of mathematics-as-upper-ontology/meta-meta-models, https://ailev.livejournal.com/1621997.html ). Doyle believes that even high school students can master this mathematics, but this is not entirely true. Now his students, without Doyle himself, are writing papers where machine learning also sits on top of this “simple mathematics”, and things immediately get a lot less simple: “Data-Driven System Level Synthesis”, http://proceedings.mlr.press/v144/xue21a/xue21a.pdf — this is exactly SoTA TAU/TAP. You can see how it looks in general, for example, in these works — https://scholar.google.com/citations?hl=en&user=ZDPCh_EAAAAJ&view_op=list_works&sortby=pubdate . As Doyle says, in each subject area there are specialists who develop SLS for this subject area, and there are no problems here. The problem is that SLS is universal, and therefore transdisciplinary — it should be applied, including to systems where each level is based on its own discipline, and optimized precisely across the entire set of levels, not just one.

2. Understand what exactly Doyle has researched:
— along the lines of systems engineering in terms of classical systems description: the mathematics there clearly concerns functionality, all these feedback schemes are runtime schemes, but at the same time there are modular arguments — about diversification of module implementations and multi-levelling with unified interfaces (Doyle’s interface modules are “operating system/virtualization”, except that “encapsulation” is not mentioned). At the same time, Doyle talks about “expensive” and “cheap” modules (communication is expensive for him, but memory is cheap — in manufacturing? in speed?). Communication and centralization are certainly connected with layout. This is what needs to be sorted out in his works, so that his “architectures” would be system architectures instead of “intuitive architectures”. He is not a systems engineer, he is an engineer of systems!
— along the biological line (creatures). Immediately questions about the free energy principle, but these are even trifles. Here I immediately have a question about enactive perception within the framework of active inference. It seems that Doyle describes how to make active inference systems, but there is a move towards: Friston says that what he found in biology would be good to use as a universal description for not very living systems (panpsychism), and Doyle says that what he found in not very living systems is perfectly suitable for living systems (mechanism). That is, these two schools of thought need to meet, let them complain to each other!
— along the personality line: here I somehow did not see notes on this topic in Doyle. But right away we can point to conversations in our systems fitness about how we create a regulator inside ourselves. And right away you can drag into systems fitness an explanation of how there is no single-level and consistent implementation in the nerves, brain, or muscles (“remember, children, you need to train these large muscles” – no, several levels always work at once: large ones quickly and roughly, small ones slowly and precisely). And if you add it to the COIN theory ( https://vk.com/wall-179019873_1435 ) as “memory of movement, retrieved by a context key”, then everything comes together (remember that speed on slow hardware in biology is achieved through the active use of cheap memory). But for some reason it seems to me that SLS can be used for personality even more fully. We need sustainable development, and not just any kind! I want “antifragility”, but Doyle in his presentations constantly says that systems without this very SLS turn out to be fragile. So think about this topic. Self-regulation, self-development, self-liberation from fragility by introducing multi-level regulatory feedback within oneself and sufficient diversification of one’s mental structure.
— along the lines of enterprise architecture. There is nothing original here, except that slogans like “small batch size” can be problematized into “diversified batch size”. And many of Doyle’s arguments, especially about the sweet spot, are absolutely identical to Reinertsen’s reasoning on operations management, even down to the main graph in his presentations with its U-shaped curve and optimization. Now think about that.
— along the lines of community (science and scientists as a typical community of practice – Doyle’s zience is exactly that) and society (where Doyle’s question is how we react to disasters – he uses the pandemic as an example, but he expects something worse and is interested in how sustainable humanity’s self-government is; he is worried that everything there is ad hoc; he does not believe that humanity will have enough trials and errors to survive in such a situation and wants to design something. Hence the forays into ethology and the cautious interest in politics). But I would immediately be interested in issues of nonequilibrium, ergodicity, and also economics and the “market as a regulator”. Multi-levelness and diversity of the element base, multiple feedbacks in a complex architecture – this is something that needs to be thought through well. This is the most dubious place, because everyone has a bias in politics, and Doyle probably has this bias too. Therefore, check, check, check. Here are the slides from the shared folder on the fragility of society, from January 26, 2022, https://www.dropbox.com/sh/7bgwzqsl7ycxhie/AABtpiOhWUzRZi_4vzfBRk8ha/1.IntroOverviewArchitecture/pptxWNarr?dl=0&preview=1.3.IntroFragile.pptx&subfolder_nav_tracking=1

3. Include this material first in the ODO2022, and then in the systems thinking course. Actually, this post is just a step in this direction, “thinking in writing”.

Thanks to Ian Glendinning and doubts about vaccination against viruses
I was led to Doyle’s work on SLS by Ian Glendinning, who dedicated three posts to this work on his blog:
— in the post notes to “The Emperor’s New Markov-Blankets?” from April 8, 2022, https://www.psybertron.org/archives/15856 , Ian writes there: “In my more general public dialogues – as opposed to researching technical papers – I find very few who have heard or understand Doyle’s multi-layer architectural-optimisation. Once we accept layers (Markov blankets) as REAL, then I think his view is VERY IMPORTANT to so much evolution / self-organisation of ALL systems. This is amazing convergence from the practical engineering level right back to fundamental physics – as information and computation”.
— in “Scientists Will Hate This”, April 11, 2022, https://www.psybertron.org/archives/15895 . There Ian puts Doyle on a par with Solms, McGilchrist and Dennett. And there Ian says that science is sick/virused — and this is not a “feature” (as evolutionists claim), but an engineering “bug”. Humans are particularly badly adapted to deal with viruses that work against human interests – especially memetic ones in society’s information and communication layers. Our social systems – including science – are much more fragile than our rationality admits. Unless we want to give up on humans and declare viruses and the simpler single-celled organisms “the winners by headcount” in the cosmic game of evolution, we need to find memetic vaccines that work.
— in Zience and John C Doyle, April 11, 2022, https://www.psybertron.org/archives/15903 . There, Ian and Doyle deal with “virus-ridden science” that opposes the emergence of any new ideas, in this case the idea of systemic multi-leveling. The institutional defence mechanism is a bug in scientific thinking, not some nefarious active conspiracy of secret interests. Essentially the bug is ignorance of the multi-layered architecture of “systems thinking” which is artificially flattened into one-dimensional logical objective “rationale”. And then everything moves on to a discussion of covid with its multi-layered defenses in the form of vaccines, masks and all that.

To be honest, I am uneasy about this whole approach to zience as a “zombified science” compared to “real science”: for me, “not letting strangers into science with their incomprehension” is the immune mechanism, “everything foreign is an enemy!” And the example with covid is more than slippery for me. All these “analogies” are very shaky, and if the point about interfaces and interface modules is absolutely clear, then I would divide the whole story with viruses and memes into three parts: a) memetics and parasitism, b) the immune system and how to determine the usefulness of an innovation (what is a virus and what is a medicine, what is a useful meme and what is harmful), and c) the limits of what we can solve by engineering versus what we don’t even need to try, but need to rely on evolution (the whole move to “social engineering” in terms of violence and subordination of individuals to a group, and “Gosplan versus market” in the distribution of scarce resources, including capital and labor, is exactly about this). So I would read these texts by Doyle and Glendinning very carefully, and I would clearly not agree with this “immune” part. Debugging is one thing, vaccination is quite another, creating an immune system is a third, and talking about “science” and its “institutes” without translating it into “research” from R&D (pragmatism, after all! there are no explanations just for their own sake, they are there to do something!) is a fourth. So be careful, here be dragons.

Here I would apply simple Deutsch criteria: freedom of publication and discussion, and a good idea will survive. But Doyle immediately says: “there are not enough resources to survive, what is needed is not free evolution of memes, but robust control/stable regulation of the course of infinite development — and it should be according to SLS, reductionist solutions will not work.” And that is true, evolutionary algorithms run into the NP problem, and the same Doyle says that SLS serves precisely to combat the apparent P!=NP. You take a seemingly unsolvable problem, use SLS (with numerous feedbacks, extensive memory at many levels, multi-scale of approximately the same level as that of the same Vanchurin and his comrades: orders of magnitude between scale steps, everything is logarithmic) and get DeSS, inaccessible by other methods. On wildly slow and imprecise hardware, the human body dances perfectly and assembles mechanical watches no less perfectly. So the same should be done with a slow and imprecise society. And then Doyle is informed that there are many such “builders of socialism” and “social engineers” here, the road to hell is paved with all their intentions, and they tell him exactly what kind of hell it is: all these “regulators” consider individual people to be cells of the body, and the immune system knocks out those who are different as strangers. Is this what we want? Doyle phlegmatically answers: “Well, as a result, your people are not fragile, but all together – fragile, look how these free people of yours fight with each other!” And so argument after argument, all the moves in this discussion are written out well in advance, although Doyle still has slightly new moves, the previous moves were of a different kind.

And Kunin and Wolf (who work with Vanchurin and Katsnelson) have discussed parasites and their necessity at length here. They were also hit hard by Covid, and there the theory (mathematics) is tested by research on the variability of the Covid-19 genome (they mention this in all interviews). All levels in the SLS architecture evolve and there are also frictions between them, optimization is multi-level, and each element of the control system is a system in itself! So I would also look in this direction. Doyle here repeatedly refers to his students, who are engaged in, among other things, the regulation of the immune system. So there may be a lot of interesting things here too.

Here is a picture to attract attention, a slide from Doyle’s presentation on fragility with examples of “bad code in software” and “bad code in memetics” (and look at his interesting examples of bad memes).

UPDATE: Discussion in the blog chat at https://t.me/ailev_blog_discussion/14093

 

Scientists Will Hate This

I mentioned John C Doyle as a candidate for a new real-life (living) “hero” in my research quest here in 2019 and again here in 2021. I say “new” hero because my long-term hero has been Dan Dennett. Of course since then, both Iain McGilchrist and Mark Solms have taken up a good deal of attention with their own heroic contributions, but I mentioned Doyle again the other day in an exchange with Anatoly Levenchuk about “Systems Thinking”.

(Doyle is actually a reference in Gazzaniga who – like Sperry – is an important source in the cognitive science space, used directly by McGilchrist and many others, but not Solms so far as I can see. It’s how I first came across Doyle.)

I sensed, and still sense, that his work is going to prove important to how architectural systems thinking is applied to everything from fundamental physics to global human issues, as well as to brains/minds and IT/Comms networks. The trouble is, he admits, he's very "disorganised" – unstructured presentations (oral and slides) given to technical audiences, and files on public shared drives. He's prolific, but it's all papers written with collaborators and students, no book(s) beyond his original control-systems specialist field, and no obvious indexing or structure to his topics. In a sense that's probably justified by his current subject matter, which demonstrates "universal" trade-off features of all multi-level systems – almost all his graphic abstractions are versions of each other.

I had already shared this presentation:
John Doyle, “Universal Laws and Architectures in Complex Networks”
March 2018 @ OFC Conference

Anatoly shared this recent one (as well as many papers – in fact Doyle drops many paper references into his presentations, acknowledging his students' contributions):
John Doyle – “Universal Laws and Architectures and Their Fragilities”
October 2021 @ C3.ai Digital Transformation Institute
(And this folder of public papers highlighting Social Science Architecture and Systemic Fragility.)

Now there is a thread of overturning scientific orthodoxy running through all the above – counter to the received wisdom of logical, objective rationality in which wholes are reduced to the summed causal history of their parts. That reduction ignores (a) ergodicity – that not just the end states of individual parts, but the network of paths through their possible histories, affects whole outcomes – and (b) strong emergence – that wholes have properties and causal powers of their own, not determined by their parts.
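A minimal numerical gloss on the ergodicity point (my illustration, not drawn from any of the sources above): in a multiplicative gamble the ensemble average over many parallel histories grows, yet almost every individual history shrinks – the distribution of paths, not the expected end state, is what determines outcomes.

```python
# Toy illustration (mine) of non-ergodic, path-dependent outcomes:
# the ensemble average grows while the typical individual path decays.
import random, statistics

random.seed(42)
PATHS, STEPS = 10_000, 100

def one_history():
    """One individual history: wealth multiplied by 1.5 or 0.6 on a fair coin each step."""
    wealth = 1.0
    for _ in range(STEPS):
        wealth *= 1.5 if random.random() < 0.5 else 0.6
    return wealth

finals = [one_history() for _ in range(PATHS)]
print("ensemble growth factor per step :", 0.5 * 1.5 + 0.5 * 0.6)         # 1.05 > 1: the average grows
print("typical growth factor per step  :", round((1.5 * 0.6) ** 0.5, 4))  # ~0.95 < 1: the typical path shrinks
print("mean final wealth over paths    :", round(statistics.mean(finals), 2))
print("median final wealth over paths  :", round(statistics.median(finals), 6))
print("paths ending below their start  :", f"{sum(f < 1.0 for f in finals) / PATHS:.0%}")
```

The mean over ten thousand paths looks healthy; the median path has lost almost everything.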

At the level of political and aesthetic endeavours, no-one would bat an eyelid. The problem is that the same issue exists in would-be science too, and in rational thinking more generally.

Dennett – warns scientific types to avoid greedy reductionism, and to suspend disbelief and hold off on definitively objective definitions while rational arguments themselves evolve over repeated cycles.

McGilchrist – having debunked cortical & hemispherical misunderstandings of how our brains and conscious minds evolved to work, pleads for recognition of the naturally sacred beyond the reach of our orthodox (objectively verifiable) scientific model of reality.

Solms – having debunked cortical and mid-brain misunderstandings of those same brains and conscious minds, and having established the basis of consciousness in subjectively felt experience – its evolved existence as a distinct causal entity grounded in fundamental information/computation processes – makes a plea for objective scientists to cross the Rubicon and take in the view from the subjective side.

Doyle – whose work arises explicitly from IT / Comms & Control computing networks, demonstrates repeatedly, with all manner of real-world examples (from talking or riding a bike to mobile apps and social media), that there are universal abstraction features of multi-layer system network architectures which mean the virtualised wholes do more – better, different – than any of their parts. "Scientists will hate this!" he repeats in throwaway remarks to his technical audiences, recognising that the strongly emergent causal identity of virtual entities is contentious for objective science, STEM and engineering. (Slightly infuriatingly, unlike the better-informed brain scientists above, Doyle uses "cortex" as shorthand for that part of the human system inside our heads. The cortical fallacy.)
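A toy illustration of the layering point (mine, not one of Doyle's examples): reliable message delivery is a property of the stacked whole, not of any single layer – the bottom layer drops packets, the middle layer only retries, the top layer just sends and waits.

```python
# Toy layering sketch (mine): the "reliable channel" the application sees is a
# virtual property of the whole stack, present in none of its individual layers.
import random

random.seed(7)

def physical_send(packet, loss=0.4):
    """Bottom layer: an unreliable channel that silently drops 40% of packets."""
    return packet if random.random() > loss else None

def link_send(packet, max_tries=20):
    """Middle layer: adds acknowledge-and-retry on top of the lossy channel."""
    for attempt in range(1, max_tries + 1):
        if physical_send(packet) is not None:
            return attempt                     # delivered; report how many physical sends it took
    return None                                # gave up after max_tries

def application_send(messages):
    """Top layer: sees only a 'reliable' virtual channel it did not build itself."""
    delivered, physical_sends = 0, 0
    for msg in messages:
        attempts = link_send(msg)
        if attempts is not None:
            delivered += 1
            physical_sends += attempts
    return delivered, physical_sends

messages = [f"msg-{i}" for i in range(1000)]
ok, sends = application_send(messages)
print(f"delivered {ok}/{len(messages)} messages at {sends / ok:.2f} physical sends per message")
```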

There is a common “problem with the received wisdom of orthodox science” running through all of this, and a lot of “systems thinking” and “information processing” common ground in where the problems arise.

It’s a “bug” in the received wisdom of “science-led” human rationality. The one that’s been driving this Psybertron project for 22 years.

We’ve barely scratched the surface with Doyle. I’ve mentioned elsewhere that, in his terms, this problem really is a bug. Viruses are especially adapted to hijacking vulnerable layers in multi-layer-architected complex systems, without needing to carry the overheads of more complex organisms such as ourselves and our social organisations. Humans are particularly badly adapted to dealing with viruses that work against human interests – especially memetic ones in society’s information and communication layers. Our social systems – including science – are much more fragile than our rationality admits. Unless we want to give up on humans and declare viruses and the simpler single-celled organisms “the winners by headcount” in the cosmic game of evolution, we need to find memetic vaccines that work.
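To make the "memetic vaccine" metaphor concrete, here is a toy spread model – my sketch, using a generic contagion process rather than anything of Doyle's: a meme that exploits a share-without-checking layer saturates the network, while "vaccinating" a fraction of nodes (adding a check at that layer) collapses its reach.

```python
# Toy contagion sketch (mine, not from Doyle): how far a meme spreads through
# nodes that re-share without checking, versus nodes "vaccinated" with a check.
import random

def spread(n=2000, contacts=8, vaccinated=0.0, seeds=5, rng_seed=3):
    """Fraction of the population reached by a meme spreading via unchecked re-sharing."""
    rng = random.Random(rng_seed)
    immune = set(rng.sample(range(seeds, n), int(vaccinated * n)))  # nodes that check before sharing
    infected = set(range(seeds))                                    # a handful of initial carriers
    frontier = list(infected)
    while frontier:
        nxt = []
        for _node in frontier:
            for _ in range(contacts):                               # each carrier forwards to random contacts
                peer = rng.randrange(n)
                if peer not in infected and peer not in immune and rng.random() < 0.25:
                    infected.add(peer)
                    nxt.append(peer)
        frontier = nxt
    return len(infected) / n

for frac in (0.0, 0.3, 0.6):
    print(f"checking nodes: {frac:.0%}  ->  meme reaches {spread(vaccinated=frac):.0%} of the network")
```

The exact numbers are arbitrary; the qualitative point is the threshold behaviour – a modest check at the right layer changes the outcome for the whole network.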

(With Anatoly’s help) I need to dig further into Doyle.

Camille Paglia – Sexual Personae

Received and started reading Sexual Personae by Camille Paglia – the book that made her famous as a radical feminist, though she also “identifies” as transgender and has been a critic of post-modernism’s consequences. Clearly someone of intellectual subtlety – and balls – on a topic that exemplifies our (21stC) polarisation predicament, at a time when we desperately need careful discourse to make progress.

(Hat tip to Lila @commonclione for the recommendation).

Only just started the read, and already loving the style, so I expect I will digest the whole. Here an early sample:

Western love is a displacement of cosmic realities. It is a defense mechanism rationalizing forces ungoverned and ungovernable.

Sex cannot be understood because nature cannot be understood. Science is a method of logical analysis of nature’s operations … But science is always playing catch-up … Science cannot avert a single thunderbolt. Western science is a product of the Apollonian mind: its hope is that by naming and classifying, by the cold light of intellect, archaic night can be pushed back and defeated.

Name and person are part of the west’s quest for form. The west insists on the discrete identity of objects. To name is to know; to know is to control. I will demonstrate that the west’s greatness arises from this delusional certitude.

Our delusional certitude. Spot on.

The traditional contrast to Apollonian is Dionysian, but she uses “chthonian” instead – from the bowels of the earth. Being post-modern whilst criticising post-modernism is the trick. I call myself PoPoMo (post-post-modernist). The naming and classifying problem is my #GoodFences vs #IdentityPolitics agenda. Lots to look forward to.