Further to the previous post, let’s try to elaborate some specifics of what John C Doyle has to say. What is clear, after the throwaway “scientists will hate this” remarks, is why he goes unpublished in the more popular journals and publishing formats: because he is pointing out “a problem with science”, he meets resistance to getting published.
[It’s interesting that in Gazzaniga, where I first came across Doyle, the most interesting read was the more autobiographical “Tales from Both Sides”, which I originally took to be a reference to the two sides of our bicameral-mind / divided-brain, but which was in fact a reference to the politics between researchers with unpopular findings – the story of whose work got published with which content – and Sperry, who eventually won the Nobel Prize. I’m not, and never have been, a conspiracy theorist. The institutional defence mechanism is a bug in scientific thinking, not some nefarious active conspiracy of secret interests. Essentially the bug is ignorance of the multi-layered architecture of “systems thinking”, which is artificially flattened into one-dimensional logical objective “rationale”.]
It is a seriously degenerate problem, and it is winning, because it’s self-reinforcing and we humans are poorly evolved to resist it. The bug is like a virus exploiting rational human weakness. Multiple timescales are part of the problem too – from speed-of-light global comms, to the pace of biological evolution, and the enormous range of calendar-based individual and collective human activities in between.
This is a disaster bigger than Anthropogenic Climate Change, not least because Zombie science, Zombie law and Zombie politics compromise our chances of successfully addressing it.
In this October 2021 presentation, Doyle gives us his take:
Ironically, Dan Dennett was one of those who used real parasite-driven behaviour to illustrate the issues the “Four Horsemen” had with religion in the early-21st-century religion-vs-science wars. The classic example is a parasitic fungus that behaves virally: its spores infect the primitive nervous system of a particular species of wasp, so that it not only effectively kills the insect but changes its behaviour to ensure it is eaten by large mammalian hosts when it does die – a massive resource to multiply the parasite’s numbers and spread them through the host population. A neat viral trick. (See Cordyceps if that’s not already familiar.) Doyle uses exactly this example, and more classic variations – like the parasite that infects mice and reduces their fear of predators like cats. Same propagation trick.
Zombies were a popular meme in philosophy – a thought experiment about whether organisms’ (like humans’) behaviour reflected any internal knowledge of what they were doing and why. How would we know if they had any internal sense of self? These virally compromised insects and mice also became known as Zombies for behaviours so obviously against their own interests.
Doyle’s contribution is Zombie Science or “Zience”
(A rough paraphrase of about 15 minutes, from ~28:30 to ~43:00, in this presentation.)
Vaccines – in both the biological and the social sense – are an example of a “Diversity Enabled Sweet Spot” in the enormous stack of human systems. As we have seen with Covid, the medical science is only one small part of the stack, which runs from policy setting and enforcement, through medical processes and procedures and virus mutations, to individual and social psychology and behaviours. Many-layered and massively complex, massively distributed asynchronously around the globe.
But that’s just the warm-up. Here’s the big thing.
Things are going wrong. And things are going to get worse. And almost everything we are doing with IT/Comms networks – like “Digital Transformation” – is actively making our problems worse.
We really need to understand fundamentally what is happening, not just anecdotally through individual examples.
Viruses exploit the universality of operating system architectures. And viruses rule – they kill HALF of their hosts every day (most of those hosts being bacteria and other single-celled creatures).
As well as Viruses, we also have more active predators in our systems – Malware. Social Media is itself the most important Malware.
The awful thing about our most recent viral experience across all these levels is that it reinforces existing inequalities (race, wealth etc.)
Language itself is hijackable – it’s an important part of our operating system – and exactly that is happening in many places around the globe. Zombie memes. Contagious misinformation – false, unhealthy and dangerous. Previously ‘solved’ human rights and freedoms problems are coming back, as well as new ones appearing.
And science is not immune. Zombie Science.
Its own self-correcting processes are no protection against the problem. Science will in fact reject all these multi-layer / diversity arguments. We are losing this battle. Good science is NOT winning the war against “Zience”.

Legal systems too – laws and enforcement. Zombie Law. Unintended consequences. Zombie corporations, Zombie capitalism … endless.

It’s the architecture, NOT the individual viruses or humans.
I feel I’m fighting in the same trenches as John Doyle.
(Also note the significance of “Diversity” in the “System Level Synthesis”.

“Vive la Difference” as I so often say. Our systems will always have layers to be practically functional – fast and accurate enough – since a single-layer system can never work. But such a system will always have a “diversity enabled sweet spot”, and many layers will be virtualised relative to the explicit layer in which “we” operate. These are vulnerable to viral attack, and we need to ensure we don’t lose sight of what matters in each layer so we can protect & manage them, not allow them to become Zombies.)
We need systems thinking – about the right things in the right layers in the architecture – not about all the “objects” (individuals) in the system and their direct logical / causal relations in the explicit layer. We need to consider and protect against viral fragility in the virtualised layers.
As in the preamble note above, the “bug” in science – and the reason Zombie Science is not helping us solve this problem – is that it rejects independent causality in multiple layers – flattens everything into one layer of explicit objects.
Good science is NOT winning the war against “Zience”.
Following the science (*) can be dangerous.
=====
Post Notes:
(*) What follows is an English translation of a post by Anatoly Levenchuk, which kindly refers – just a couple of days later – to my three recent posts here as his introduction to the importance of Doyle, even if Anatoly remains sceptical about the literal virus / bug in real science (how valid is the analogy, etc?). Sure, I’m finding this resistance more generally when pointing out “limitations to science”. My focus is more metaphysical, and practical science certainly has lots of work-arounds that maintain sanity in the face of dubious or potentially misleading anomalies. (Interestingly, Anatoly also invokes Deutsch & Marletto in support, and, also useful for me, offers several warnings on why “Cybernetics” failed as the umbrella term for “Systems Thinking” – synonymous for my purposes.)
John Doyle: System Level Synthesis and Rabid Zombies
by Anatoly Levenchuk, 12 April 2022

What can be found in systems thinking
In my current understanding, systems thinking is absent from the intelligence stack as a separate discipline. In fact, it is a certain set of ideas that are themselves understood in different disciplines and developed in different schools of thought:
— architectural consideration as the result of functional analysis and modular synthesis, to which we also add allocation/location and the splitting of total cost of ownership, plus the WBS for the work of the creation system. These are all different ways of splitting into parts and wholes. Features of modular synthesis (interface stability, general architectural principles for engineering such as modularity, and the cost of communications as the source of its occurrence) are already given in the systems thinking textbook. But there is also the theory of system level synthesis: the need for feedback within controllers, multi-scale/diversity for high-speed systems on slow equipment — this is John Doyle.
— minimal physicalism within the framework of panpsychism, considered along the lines of development of IIT and quantum-information scale-free physics (Fields and Glazebrook, and each word here needs a long deciphering). Along this line there is a consideration of Markov/Friston blankets (Pearl and Friston), ergodicity and renormalization, “natural boundaries of systems as attractors in the state space that can withstand changes” (Fields; and since everything is nested, the “systemicity” of parts-wholes as multi-level “systems in the environment” is quite applicable). Constructor theory (Deutsch, Marletto) belongs here.
— the principle of free energy as defining the multi-level optimization/learning of the entire universe, including multi-scale time (evolution; creation and adaptation/learning/tuning of the organism-phenotype; the living organism). Frustrations between levels as what sets up multi-level optimization (Vanchurin, Katsnelson), and the need for different scales of both systems and time. The problem of complex life cycles and parasites/hackers (Koonin, Levin, Doyle), including the problem of creationism where it is not needed (evolution) and the absence of creationism where it is needed (trial-and-error engineering instead of design based on at least SLS) — Doyle. And also memetics (including parasitic memes like zombies and rabies, which change the behavior of the host), extending up to the higher (social) evolutionary levels — here Deutsch and Doyle.
— pragmatism, along the lines of physicalist semantics through measurements and observers. The problem of the “biological individual” (which of the system levels to consider the “main individual”; where the agency of the level below the organism and of the level above the organism lies). The principle of minimizing free energy belongs here. The point that “just science as an explanation of the world” is not needed; science is needed for engineering (to change the world for the better in order to minimize that very free energy, that is, to reduce unpleasant surprise). We always create (or modernize, brownfield) successful systems that change the world for the better; we reduce unpleasant surprises now and in the future. The goals of life and endless development belong here.
— transdisciplinary intelligence stacks (how to think about all levels at once, and at each level universally) and engineering (what are the features of the practices at the evolutionary levels of matter-creature-personality-organization-community-society-humanity). This is where I am trying for now, and I am writing these notes as part of my work on exactly this point, because I need the SoTA of all of this. I am writing straight from my head, and make no claim to completeness here. All of these ideas are discussed in a variety of transdisciplines (physics, mathematics, ontology, methodology) and also in the engineering of a variety of systems (systems engineering for cyberphysics, medicine and farming for creatures, education for people, enterprise engineering for organizations, cultural construction / community building for communities, and so on — this “so on” is not so obvious from the point of view of “engineering”).
And I have written quite a lot about all of this, and given many links. But a new name and a new line of thought appear in this text, one that has led to a development of systems thinking. This is John Doyle and the theory of system level synthesis (SLS) that he and his students are developing. Usually the synthesis of a regulator/controller/control system follows the old cybernetic scheme, where the controller is separate and the system (plant) it controls is separate. SLS instead shifts the task of controller synthesis to the synthesis of the entire system at once, together with numerous external and even internal feedbacks in the controller (they are needed there in abundance).
John Doyle and his proposals for a general theory of control in living and nonliving matter
I will not go into the nuances of terminology here: how TAU differs from TAR (if you do not know these words, then you do not need to know the difference), how a regulator differs from a controller, and a controller from a control system, and how all this is written in English. This is unimportant for our purposes; these are all different sides of the same set of problems: how to achieve our goals so that along the way we do not get blown away / torn down / eaten / broken somewhere, and even the goal does not really concern us (since it is always possible to achieve a minimum of free energy if there is no local goal). And I assume that you know about cybernetics as a general science of control in the living and nonliving, and you know why it died: because “it could not”. John Doyle essentially repeats all the theses of the problem statement for cybernetics, but carefully does not pronounce the word itself — yet offers the solutions that cybernetics could not formulate. Cybernetics seemed to be almost a synonym for the systems approach, but there was no real systemicity in cybernetics (no multi-levelness “out of the box”, and no understanding of why and how this multi-levelness arises). That’s why it “couldn’t”. By the way, I never understood why they separate the control system from the system: it always seemed to me that the control function is somehow always spread out across the entire system, distributed and multi-level. In SLS (system level synthesis) theory this is exactly what happens: the system blocks with their speed of calculation or activation or sensing, the communication delays, and the multitude of feedbacks between all these heterogeneous blocks are modeled at once. The trick is that feedbacks exist inside the “control system”, which is itself very heterogeneous in the characteristics of its elements, and this heterogeneity is required, as is modeling/synthesis of the entire system as a whole, not just a separate controller and a separate installation/plant.

I already wrote about the convergence of systems engineering, software engineering and control systems engineering in 2009: https://ailev.livejournal.com/675208.html . But everything was standard there, although the main problem was already identified: “in modeling electrical networks for the purpose of studying blackouts, one cannot proceed from delays determined purely by the ‘physical network’: a significant number of delays are associated not with physics, but with the processing time of software and people. On the other hand, one cannot model only software and people, because then the specificity of the electrical network will be lost. Therefore, one can only model together, but this is precisely what causes the substantive problem of combining different models.”
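For concreteness, here is a minimal sketch of the finite-horizon, state-feedback form of SLS in Python, assuming numpy and cvxpy are available. The toy double-integrator plant, horizon and cost weights are illustrative choices of mine, not taken from Doyle’s papers; the point is only that the decision variables are the closed-loop response maps themselves rather than a controller matrix.

```python
# Minimal finite-horizon state-feedback SLS sketch (illustrative only).
# Instead of designing a controller K directly, we optimise the closed-loop
# response maps Phi_x, Phi_u from disturbance to state and input.
import numpy as np
import cvxpy as cp

# Toy double-integrator plant (hypothetical numbers, not from Doyle's work)
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
n, m = B.shape
T = 20  # finite-impulse-response horizon for the closed loop

# Decision variables: closed-loop maps at lags 1..T
Phi_x = [cp.Variable((n, n)) for _ in range(T)]
Phi_u = [cp.Variable((m, n)) for _ in range(T)]

# SLS achievability constraints:
#   Phi_x[1] = I,  Phi_x[k+1] = A Phi_x[k] + B Phi_u[k],  FIR closure at T.
constraints = [Phi_x[0] == np.eye(n)]
for k in range(T - 1):
    constraints.append(Phi_x[k + 1] == A @ Phi_x[k] + B @ Phi_u[k])
constraints.append(A @ Phi_x[T - 1] + B @ Phi_u[T - 1] == 0)

# H2-style cost on the whole closed loop: state and input responses together
Q_sqrt, R_sqrt = np.eye(n), 10.0 * np.eye(m)
cost = sum(cp.sum_squares(Q_sqrt @ Phi_x[k]) +
           cp.sum_squares(R_sqrt @ Phi_u[k]) for k in range(T))

cp.Problem(cp.Minimize(cost), constraints).solve()
print("closed-loop H2 cost:", cost.value)
```

The optimised Phi_x, Phi_u describe the entire closed loop; a realising controller can be recovered from them afterwards, which is exactly the shift from “synthesise a controller” to “synthesise the whole system” described above.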
By and large, control systems engineering is not classical systems engineering (it was precisely this example that prof. Derek Hitchins cited in his distinction between systems engineering and the engineering of [some kind of] systems — https://systems.hitchins.net/profs-blog/systems-engineering-vs.html). The task of control systems engineering was to create controllers; it inherited cybernetics, the science of control, whose central dogma was homeostasis, with a system diagram of a controller, a device/plant and a feedback loop. For a while everything was great, and algorithms for such controllers were discussed for many years: PID was among the early ones (in 2011 I even wondered how to teach the PID controller to small children, https://ailev.livejournal.com/971904.html ). Science developed, control algorithms developed. And everything was good in simple systems based on electronics, and bad in:
— biological systems, because the control schemes there were wild (similar to a plate of spaghetti: all these feedbacks were monstrously tangled, and there were an awful lot of them even in the simplest cases). Cybernetics could not cope with them, and therefore died due to lack of results.
— different networks, the largest distributed cyber-physical systems. There, everything immediately became confusing for some reason and also turned into a nightmare because of the endless delays and uncertainties. Although for the Internet and packet transmission based on the TCP protocol, simple hacks were found that made life acceptable: managing the Internet was somehow possible, as long as nothing important hung on it in real time.

John Doyle (an outstanding personality in his own right; look at the bio on his page, http://www.cds.caltech.edu/~doyle/wiki/index.php?title=Main_Page , and here is a collection of his relatively fresh materials “in bulk”, https://www.dropbox.com/sh/7bgwzqsl7ycxhie/AABQB9L2J-XmCniwgyO3N83Ba?dl=0 ) began researching the question of “universal laws of control in living and nonliving things” with his students (yes, the question is posed exactly as Wiener posed it with his cybernetics) and came to the following conclusions:
— the key architectural issue in controllers is the speed of their operation, and without loss of accuracy. In biology, if you fail to control the body, you are eaten (literally), and if you miss, you are eaten too.
— the equipment is always either precise-flexible and slow, or crude-inflexible and high-speed. On slow “hardware” (or meat, or even molecules in biochemistry) you don’t make it in time (and eventually die); on fast “hardware” you miss (and eventually die). The key point in the equipment is slow and leaky communications, which are also expensive. And there are quite a lot of such pairs like speed-precision (flexibility-rigidity, customizability-automation, etc.).
— the solution to the “fast controller on slow elements” problem is: 1. a multi-level controller architecture across a variety of scales, in which 2. there is a lot of feedback inside the controller, and 3. fast communication is replaced with memory, which allows making some predictions, since past states can be extrapolated, plus 4. an interface module for interchangeability of modules, and 5. an immune system, since this interface module is exactly what gets hacked.

Where to read about Doyle’s work on system-level synthesis
Here’s where to read in more detail:
— a review of system level synthesis, 2019, https://arxiv.org/abs/1904.01634 (this is how it developed in the depths of techies, without much access to biology). This article surveys the System Level Synthesis framework, which presents a novel perspective on constrained robust and optimal controller synthesis for linear systems. We show how SLS shifts the controller synthesis task from the design of a controller to the design of the entire closed loop system, and highlight the benefits of this approach in terms of scalability and transparency. We emphasize two particular applications of SLS, namely large-scale distributed optimal control and robust control. In the case of distributed control, we show how SLS allows for localized controllers to be computed, extending robust and optimal control methods to large-scale systems under practical and realistic assumptions. In the case of robust control, we show how SLS allows for novel design methodologies that, for the first time, quantify the degradation in performance of a robust controller due to model uncertainty — such transparency is key in allowing robust control methods to interact, in a principled way, with modern techniques from machine learning and statistical inference. Throughout, we emphasize practical and efficient computational solutions, and demonstrate our methods on easy to understand case studies.
— a multi-layered architecture with multiple feedback loops within living systems, “Internal Feedback in Biological Control: Architectures and Examples”, October 2021, https://arxiv.org/abs/2110.05029
— Fitts’ Law for speed-accuracy trade-off describes a diversity-enabled sweet spot in sensorimotor control, 2019, https://arxiv.org/abs/1906.00905
— an experiment with a “bike controller” (demonstrating that the theory predicts the experimental data exceptionally well) and a description of biological multi-layeredness, “Diversity-enabled sweet spots in layered architectures and speed–accuracy trade-offs in sensorimotor control”, https://www.pnas.org/doi/10.1073/pnas.1916367118

But it is also interesting, of course, to listen at a frantic pace (I slowed the video down so that it would not flicker so much and I could at least somehow make sense of theses delivered like a tongue-twister) and to look at the colorful pictures in the presentations (they are all about the same thing, but differ in their emphases):
— an older half-hour talk (March 2018) with the main idea and pop experiments right on stage, https://www.youtube.com/watch?v=GD7x1az6U6g , and there are 253 slides for this half-hour: http://www.lccc.lth.se/media/LCCC2018/WS2018-10/Slides/Doyle4Lund.pdf
— a newer half-hour (December 2019), more theory, https://www.youtube.com/watch?v=qKibTKK_yY8
— a fresh hour-long one (October 2021), developing the theme of zombies, rabies and bad science, https://www.youtube.com/watch?v=Bf4hPlwU4ys

The main thing here is that to achieve robust, high-speed and precise control, you need to combine, in one fairly complex architecture, slow-and-precise elements with fast-and-imprecise ones, and also have some memory — this diversity gives an optimum / sweet spot, the diversity-enabled sweet spot (DeSS).
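As a purely illustrative toy (my own construction, assuming only numpy; none of the numbers come from Doyle’s papers), the sketch below tracks a steadily drifting target with a fast-but-weak layer, a slow-but-exact layer, and both together. Only the combination keeps the tracking error small, which is the DeSS intuition in miniature.

```python
# Toy "diversity-enabled sweet spot": combining a fast layer with limited
# authority and a slow layer that corrects exactly but only occasionally
# tracks a drifting target far better than either layer on its own.
import numpy as np

target = 0.3 * np.arange(500)          # target drifts by 0.3 per step

def mean_error(use_fast, use_slow):
    x, errs = 0.0, []
    for t, r in enumerate(target):
        u = 0.0
        if use_fast:                   # fast: every step, but limited to +/-0.2
            u += np.clip(r - x, -0.2, 0.2)
        if use_slow and t % 25 == 0:   # slow: exact correction, every 25th step only
            u += r - x
        x += u
        errs.append(abs(r - x))
    return np.mean(errs)

print("fast-but-crude only :", round(mean_error(True, False), 2))   # ~25
print("slow-but-exact only :", round(mean_error(False, True), 2))   # ~3.6
print("both layers together:", round(mean_error(True, True), 2))    # ~1.1
```

With these toy numbers the combined configuration comes out roughly three times better than the slow layer alone and over twenty times better than the fast layer alone; the papers above formalise when and why such layered divisions of labour win.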
Zombies, rabies and bad science (zience)
There are hosts, and there are parasites. Parasites that hack/capture complex behavior are called “zombies”. Those that change behavior in an aggressive direction are called “rabies”. The parasite attack occurs by replacing something good with something bad at a well-defined interface. And here Doyle follows the standard line of all the “minimal physicalists – panpsychists”: since we are talking about systemicity as such, it will also work at the social scale. There, ethology rules (with the surprising conclusion that the best results are obtained under matriarchy), and people also have memetics (and here are those very “zombies” – memes that change behavior). In any case, the conclusion is that aggression is bad, it shortens life. And “social immunology” and “social parasitology” are needed.

Doyle says that at the memetic interface it is necessary to distinguish working memes, zombie memes as error-causing memes, and rage memes at the social level. Science for him today is zience, a “zombie science”, deeply rotten in its attitudes, because this science does not lead to the synthesis of complex architectures with multi-level solutions based on diversified elements with multiple feedbacks in the control loop. Doyle’s thesis is that the absence of creative design where it should be (in engineering, including social engineering) is much worse than the presence of creationism instead of evolution. He is pissed off (pun intended) that instead of normal design based on the theory of system-level synthesis, through ignorance of how this is done, regulators:
— “become” in the course of “self-organization” or chaotic local design of individual subsystems, they appear in fact ad hoc, that is, of obviously poor quality, especially the first versions
— theories of such regulators appear only within individual fields of knowledge, even though they are transdisciplinary (universal)
— no attempts are made to create something like biological immune systems, to recognize evil as evil. He considers the theses of “flourishing diversity” and “evolution” to be incorrect here; science for him is precisely about avoiding evolution (the “trial and error” method) in engineering. He is for rational design: creationism where it is needed (in technology and society).

The standard criticism here: “we know these techie-physicists, they always climb into places where they understand nothing. Fomenko, Sakharov, now Deutsch has climbed into politics. And this Doyle is there too! We ignore him.” Doyle says outright that he is saying seditious things, and that his formulas for regulators/controllers for biological systems of the “bicycle control” type (where there is no sociology or politics, the system level being below that of a rational being) are published in biological journals with great difficulty, only thanks to good personal connections! What, then, can we say about his thoughts on science, which in his opinion is zombified, and about social systems, where the design does not stand up to any criticism and there are plenty of parasites and evolutionary errors on all the interfaces, leading to vegetating instead of flourishing. No, Doyle says nothing about market regulation, but he is outraged by humanity’s ugly reaction to covid-19; he believes we could have done a better job if we had acted on the basis of theory. And there is a theory, that same SLS! They just don’t want to listen to it, because it is systemic, not a single-level reductionist solution!
Doyle also believes that no quantity of neural networks and AI/ML will help humanity without SLS: at the level of small engineering solutions of the near future they are quite good, but when it comes to large distributed cyber-physical systems everything will die, and no reset button will help; nothing will simply happen during reboots, everything will stop and not restart, due to the lack of a well-designed multi-level system with various modules of different scales sitting on well-defined and immune-protected interfaces and (most importantly!) jointly providing stable/robust control (when conditions change sharply, regulation both succeeds and does not miss) through complexly organized multiple feedbacks and good extensive memory inside the control system. For now everything is ad hoc; sometimes it happens to work by chance, and “technoevolution” by trial and error sometimes pokes successfully.
What to do with all this
Here are my plans for the work of John Doyle and his many students:

1. Use it in the (systems) engineering course. Regardless of the conclusions about biological systems (there are definitely strong results there) and about social systems, the work on SLS is extremely important for the architectural development of cyber-physical systems. Remember that systems engineering and control systems theory are converging (plus software engineering, where I would also include engineering on universal/trainable algorithms, that is, modern AI on neural networks). So I would bring this cyber-physical part into the systems engineering course right away; this is clearly no worse an idea than the ideas of DSM (design structure matrix), while the connections are absolutely clear and the mathematics behind them is clear.
In these works on SLS, mathematics is used as the basis of a meta-meta-model/upper ontology, so the ontological elaboration will not be too difficult (I just had a text on this topic of mathematics-as-upper-ontology/meta-meta-models, https://ailev.livejournal.com/1621997.html ). Doyle believes that even high-school students can master this mathematics, but this is not entirely true. Now his students, without Doyle himself, are writing papers where machine learning also sits on this “simple mathematics”, and things immediately get serious: “Data-Driven System Level Synthesis”, http://proceedings.mlr.press/v144/xue21a/xue21a.pdf — this is exactly SoTA TAU/TAP. You can see how it looks in general, for example, in these works: https://scholar.google.com/citations?hl=en&user=ZDPCh_EAAAAJ&view_op=list_works&sortby=pubdate . As Doyle says, in each subject area there are specialists who develop SLS for that subject area, and there are no problems here. The problem is that SLS is universal, and therefore transdisciplinary: it should be applied also to systems where each level is based on its own discipline, and optimized across the entire set of levels, not just one.
2. Understand what exactly Doyle has researched:
— along the lines of systems engineering, in terms of the classical systems description: the mathematics there clearly concerns functionality; all these feedback schemes are runtime schemes, but at the same time there are modular arguments — about diversification of module implementations and multi-levelness with unified interfaces (Doyle’s interface modules are “operating system/virtualization”, except that “encapsulation” is not mentioned). At the same time, Doyle talks about “expensive” and “cheap” modules (communication is expensive for him, but memory is cheap — in manufacturing? in speed?). Communication and centralization are certainly connected with layout. This is what needs to be sorted out in his works, so that his “architectures” become system architectures instead of “intuitive architectures”. He is not a systems engineer, he is an engineer of systems!
— along the biological line (creatures). Questions immediately arise about the free energy principle, but these are trifles. Here I immediately have a question about enactive perception within the framework of active inference. It seems that Doyle describes how to make active inference systems, but the moves run in opposite directions: Friston says that what he found in biology would be good to use as a universal description for not-very-living systems (panpsychism), and Doyle says that what he found in not-very-living systems is perfectly suitable for living systems (mechanism). That is, these two schools of thought need to meet; let them complain to each other!
— along the personality line: here I have somehow not seen notes on this topic from Doyle. But we can immediately point to conversations in our systems fitness course about how we create a regulator inside ourselves. And right away you can bring into systems fitness an explanation of how there is no single-level, uniform implementation in the nerves, brain or muscles (“remember, children, you need to train these large muscles” – no, several levels always work at once: large ones quickly and roughly, small ones slowly and precisely). And if you add to this the COIN theory ( https://vk.com/wall-179019873_1435 ) as “memory of movement, retrieved by a context key”, then everything comes together (remember that speed on slow hardware in biology is achieved through the active use of cheap memory). But it seems to me that SLS can be applied to the personality even more fully. We need sustainable development, and not just any kind! I want “antifragility”, and Doyle in his presentations constantly says that systems without this very SLS turn out to be fragile. So think about this topic: self-regulation, self-development, self-liberation from fragility by introducing multi-level regulatory feedback within oneself and sufficient diversification of one’s mental structure.
— along the lines of enterprise architecture. There is nothing original here, except that slogans like “small batch size” can be problematized into “diversified batch size”. And many of Doyle’s arguments, especially about the sweet spot, are absolutely identical to Reinertsen’s reasoning on operations management, right down to the main graph in his presentations with the U-shaped curve and optimization. Now think about that.
— along the line of communities and society: communities (science and scientists as a typical community of practice; Doyle’s zience is exactly that) and society (where Doyle’s question is how we react to disasters – he uses the pandemic as an example, but expects something worse, and is interested in how sustainable humanity’s self-government is; he worries that everything there is ad hoc, does not believe that humanity will have enough trials and errors to survive such a situation, and wants to design something. Hence the forays into ethology and the cautious interest in politics). But I would immediately be interested in questions of nonequilibrium and ergodicity, and also economics and the “market as a regulator”. Multi-levelness and diversity of the element base, multiple feedbacks in a complex architecture – this needs to be thought through well. This is the most dubious place, because everyone has a bias in politics, and Doyle probably has this bias too. Therefore: check, check, check. Here are the slides from the Share on the fragility of society, from January 26, 2022: https://www.dropbox.com/sh/7bgwzqsl7ycxhie/AABtpiOhWUzRZi_4vzfBRk8ha/1.IntroOverviewArchitecture/pptxWNarr?dl=0&preview=1.3.IntroFragile.pptx&subfolder_nav_tracking=1

3. Include this material first in the ODO2022, and then in the systems thinking course. Actually, this post is just a step in that direction, “thinking in writing”.
Thanks to Ian Glendinning and doubts about vaccination against viruses
I was led to Doyle’s work on SLS by Ian Glendinning, who dedicated three posts to this work on his blog:
— in the post notes to “The Emperor’s New Markov-Blankets?” from April 8, 2022, https://www.psybertron.org/archives/15856 , Ian writes there: “In my more general public dialogues – as opposed to researching technical papers – I find very few who have heard or understand Doyle’s multi-layer architectural-optimisation. Once we accept layers (Markov blankets) as REAL, then I think his view is VERY IMPORTANT to so much evolution / self-organisation of ALL systems. This is amazing convergence from the practical engineering level right back to fundamental physics – as information and computation”.
— in “Scientists Will Hate This”, April 11, 2022, https://www.psybertron.org/archives/15895 . There Ian puts Doyle on a par with Solms, McGilchrist and Dennett. And there Ian says that science is sick/virused — and this is not a “feature” (as evolutionists claim), but an engineering “bug”. Humans are particularly badly adapted to deal with viruses that work against human interests – especially memetic ones in society’s information and communication layers. Our social systems – including science – are much more fragile than our rationality admits. Unless we want to give up on humans and declare viruses and the simpler single-celled organisms “the winners by headcount” in the cosmic game of evolution, we need to find memetic vaccines that work.
— in “Zience and John C Doyle”, April 11, 2022, https://www.psybertron.org/archives/15903 . There, Ian and Doyle deal with “virus-ridden science” that opposes the emergence of any new ideas, in this case the idea of systemic multi-levelness. The institutional defence mechanism is a bug in scientific thinking, not some nefarious active conspiracy of secret interests. Essentially the bug is ignorance of the multi-layered architecture of “systems thinking” which is artificially flattened into one-dimensional logical objective “rationale”. And then everything moves on to a discussion of covid with its multi-layered defenses in the form of vaccines, masks and all that.

To be honest, this whole approach to zience as a “zombified science” compared to “real science” makes me tense: for me, “not letting strangers into science with their incomprehension” is the immune mechanism, “everything foreign is an enemy!” And the covid example is more than slippery for me. All these “analogies” are very shaky, and while the point about interfaces and interface modules is absolutely clear, I would divide the whole story with viruses and memes into several parts: a) memetics and parasitism, b) the immune system and how to determine the usefulness of an innovation (what is a virus and what is a medicine, what is a useful meme and what is harmful), and c) the limits of what we can solve by engineering versus what we should not even try to engineer but should leave to evolution (the whole move to “social engineering” in the sense of violence and the subordination of individuals to a group, and “gosplan versus market” in the distribution of scarce resources, including capital and labor, is exactly about this). So I would read these texts by Doyle and Glendinning very carefully, and I clearly would not agree with the “immune” part. Debugging is one thing, vaccination is quite another, creating an immune system is a third, and talking about “science” and its “institutes” without translating it into “research” within R&D (pragmatism, after all! There are no explanations just for their own sake; they exist in order to do something!) is a fourth. So be careful, there are dragons here.
Here I would apply Deutsch’s simple criteria: freedom of publication and discussion, and a good idea will survive. But Doyle immediately says: “there are not enough resources to survive; what is needed is not free evolution of memes but robust control / stable regulation of the course of infinite development — and it should be done according to SLS; reductionist solutions will not work.” And that is true: evolutionary algorithms run into the NP problem, and the same Doyle says that SLS serves precisely to work around the apparent P != NP. You take a seemingly unsolvable problem, apply SLS (with numerous feedbacks, extensive memory at many levels, and multiple scales of roughly the same kind as Vanchurin and his colleagues use: orders of magnitude between scale steps, everything logarithmic) and you get a DeSS inaccessible by other methods. On wildly slow and imprecise hardware the human body dances perfectly and assembles mechanical watches no less perfectly. So the same should be done with a slow and imprecise society. And then Doyle is told that there have been many such “builders of socialism” and “social engineers” here, that the road to hell is paved with their intentions, and exactly what kind of hell it is: all these “regulators” treat individual people as cells of an organism, and the immune system knocks out those who are different, as strangers. Is this what we want? Doyle phlegmatically answers: “Well, as a result your people are individually not fragile, but all together they are fragile – look how these free people of yours fight with each other!” And so, argument after argument; all the moves in this discussion were written out well in advance, although Doyle still has some slightly new moves – the previous ones were of a different kind.
And Koonin and Wolf (who work with Vanchurin and Katsnelson) have discussed parasites and their necessity at length. Covid also hit them hard, and their theory (mathematics) is tested by research on the variability of the covid-19 genome (they mention this in all their interviews). All levels in the SLS architecture evolve and there are also frictions between them, optimization is multi-level, and each element of the control system is a system in itself! So I would also look in this direction. Doyle here repeatedly refers to his students, who are engaged in, among other things, the regulation of the immune system. So there may be a lot of interesting things here too.
Here is a picture to attract attention, a slide from Doyle’s presentation on fragility with examples of “bad code in software” and “bad code in memetics” (and look at his interesting examples of bad memes):

UPDATE: discussion in the blog chat at https://t.me/ailev_blog_discussion/14093
You probably already know, but just in case, I’ll mention Thomas Kuhn’s work on science and paradigm shifts. Understood in his terms, the “bug” in science is a very old one, and its roots are epistemological. All scientific research is conducted within a paradigm, but the paradigm influences what counts as “evidence.” Phenomena contrary to the reigning theory are at first not even noticed or recognized as important “facts.” If they become more persistent obstacles to current theory, they are explained away, dismissed as anomalies, or otherwise resisted. Eventually the reigning theory becomes so riddled with inconsistencies and beset with contrary observations that its very paradigm is overturned, and a new one is adopted which can accommodate the new evidence.
I believe we are in the middle of such a paradigm shift, and the work of people like McGilchrist and Solms and Doyle is part of it. So is the work of Sally Weintrobe (Psychological Roots of the Climate Crisis), Charles Eisenstein (The More Beautiful World Our Hearts Know Is Possible) and Robin Wall Kimmerer (Braiding Sweetgrass). The paradigm shift involves a more existential, personal way of knowing; or in other terms, restoring subjective experience as the heart of reality.
In many ways beneficial, this shift is also not without its problems and dangers.
I forgot to include in that list Lynn Margulis, whose groundbreaking work in biology was also resisted for decades by scientific orthodoxy, as you can read at Wikipedia.
Ah yes, Kuhn’s work – I use later derivations of it: waves or cycles of change, after Freeman & Perez and Kondratiev / Kondratieff primarily (~80 years), but screwed-up by the speed of the current ICT cycle. (Quite a large part of what I’ve researched.)
What I don’t see, though, in these generic cycles of change is any specific idea of a “bug” in science. Although, as you say, any “problem” with the existing paradigm is typically denied until the weight of change creates the eventual shift. Being “revolutionary”, the changes carry risks too, as you say.
Of course what I’m forgetting is that Kuhnian paradigms were specifically “scientific revolutions” – I’ve always tended to focus on the industrial / economic consequences.
Lynn Margulis I’ve also used – in fact one of my regular correspondents knew the Margulis and Sagan families.
I like “restoring subjective experience as the heart of reality” – I’ll take a look at those references new to me. Thanks AJ.