Strawson on Panpsychism, Consciousness & Free-Will

Some great stuff – pithy quotes from the classic commentators, including both positive and negative views on Dennett. Strawson has grown on me since I started from a position of apparent misunderstanding. (A man after my own heart, synthesising the best bits of any number of sources without feeling the need to claim originality in the form of “look, my answer’s the best one”.)

The ideas that panpsychism suffers from “the combination problem”, or fails to solve “the hard problem”, rest on misunderstandings about what consciousness fundamentally is – in exactly the same way that physicalism says precisely nothing about what physical stuff intrinsically is. These are not the real problems.

Must go back and grab the definitional quotes.
(And in fact, there is a transcript too.)

Hat tip @2philosophical_ on Twitter.

Solms and Harari on the Future of Humanity

Made no secret of the fact that I was never very impressed with Yuval Harari’s take on consciousness – what it is and how it functions – so I was a little sceptical about linking to this discussion facilitated by the Indonesian blogger / YouTuber Gita Wirjawan. In terms of biological brains and minds, my trajectory has been Dennett, McGilchrist and Solms, and here Wirjawan has Harari in conversation with Solms – advertised as being on the “Dawn of Future Consciousness”.

Watched about 2/3 in real time and have the recorded link too.

It is VERY GOOD … and far reaching – humanity, society and governance – not just individual brains and minds.

Strong emphasis on the simplicity of affect / feeling as the root of our knowledge of the world, and the need for individual attention to that. More than science. The place of traditional narrative. The distortions of social media killing true democratic governance as conversation. (So far just my real-time tweets – more later?)

Open-Endedness in Applied Systems Thinking

The EEM Institute – the “Institute for Augmented Intelligence in Entrepreneurship, Engineering and Management” – is the evolved public face of the (Russian) Systems Management School, where Systems Thinking is now one of the courses under the EEMI.

“Open-Endedness” is an aspect of the wider curriculum at EEMI, and below are the slides and a recording of the presentation on that topic by Anatoly Levenchuk, their “Head of Science”.

(I consider Anatoly a friend after some years of working together on generic systems information modelling for Russian infrastructure projects, where Anatoly also invited me to speak at a couple of INCOSE events in Moscow and Nizhny Novgorod on those knowledge-modelling standards. Anatoly was chair of the INCOSE Russian chapter at the time I was working with him and his colleague Victor Agroskin in their “TechInvestLab” information systems consultancy.)

Previously here I have reviewed his book “Systems Thinking 2020”.

Below I have some comments on his latest presentation (which unfortunately I missed in real time).

Slides – “Open Endedness Curriculum at EEMI”

The presentation itself:

(PLEASE NOTE: The following review is preliminary, based on an early subtitled version of the presentation which has since been updated above. I will update the review in line with the improved captions and in the light of clarifying dialogue with Ivan and Anatoly.)

All OK, up to the point where he talks about the “singularity”. I’m more sceptical that the kind of AI that could ever overtake humans exists at all (yet) – but I’m absolutely on board with the idea that, as “Cyborg” AI-augmented humans, we are already behind our own curve in dealing mentally / cognitively / organisationally with the systems complexity of human reality on the planet we call home. Very pleased to see “Engineering” defined as the human endeavour intended to “improve” that. It’s how I’ve always seen the point of my being an engineer.

Interesting to see Tononi’s IIT referenced – much referenced here too. Also, Graziano’s “Standard Model of Consciousness”? Not sure I understand what that is, but I have a strong position on the reality of consciousness anyway.

Love the idea that the customers of EEMI training are seen as these Augmented Intelligence Cyborgs – both the individual humans and the organisations. Quite right. Education and training as part of “our” evolution. (~17:20 and the topic in the title, Open-Endedness, still hasn’t been introduced?)

Like the Intelligence Stack (some errors in the subtitle captions).
Not sure why the distinction of human vs agent activity in the applied stack – just more generic? Deontic modality? No applied engineering at the human level – not sure I understand. Isn’t that what Systems Thinking is? (For any thinking agent, including humans?) Ah – this is at the level of the whole of humankind, not individuals and organisations. OK. The Applied-Level Stack makes sense too.

(I completely agree with this information systems levels view of the entire stack of evolutionary levels.)

Bayesian decision theory based. Good (but errors in the captions again). Recognition that this is a bleeding-edge, unproven approach for “elite” learning – “not the whole peloton”.

Bertalanffy / Checkland systems theory basis – not a simple “lifecycle” view – systems evolve from systems. Ah, hence the “3rd Generation” claim in terms of what people understand by cybernetics and systems. Not such a concern for my “deflationary” approach – I just see this as focussing on the fundamentals of the original first-generation theories.

Constructor Theory (Deutsch and Marletto) – lifecycles are at the intelligent-agent level – agreed – hence dropping the use of “lifecycle” language for “physical” systems. Again, this is just linguistic baggage to me, so I’m less concerned. Understood. (As I posted on INCOSE LinkedIn: “Systems Engineering is first and foremost about humans”.)

The evolution of systems is continuous. Systems engineering / thinking is top of the stack. 20 key “State of the Art” references all VERY recent.

Memome -> Phenome distinction. I like it.
Constructors construct constructors. Multi-level evolution. Smart evolution (still evolution, but more than the extended modern Darwinian synthesis). Systematic is not systemic. Agreed. Affordances view. Decomposition by attention. Functional systems view. Disruption comes from below.

Doxastic modality – not (strictly) requirements, but more like “hypotheses” or “best guesses” – hypothetical / proposed “affordances”. Functional definition. (I’d already made this implicit translation when using orthodox “requirements” language.) Entrepreneurship?

Still no mention of “Open-Endedness”?

A lot in there, (lots of common references) – not all clear / agreed, but fascinating. Needs dialogue.

I have a different tactic. I understand the need to change the words used, but I would be nervous about an approach that relies on understanding a radically new technical language – I much prefer to evolve the usage of recognisable language, even words with old baggage. Applying Systems Thinking to the body of systems thinking? But I think this is tactical rather than fundamental; once the processes are understood, the language will also evolve.

IDEF0 Diagramming Tools

I aired my interest in IDEF0 as a diagramming language in May and August this year. The first time in passing, as one of the needs slowing my own writing progress; the second time as a kind of “spec” for what I might expect from a community interested in systems thinking (the Active Inference group killing two birds with one stone?)

Nothing’s changed, and I’d kinda given up on better tools, resigned to simply “drafting” the diagrams in standard drawing tools. It’s an old language (late-’70s / early-’80s), from the earliest days of “CAD”, and it was back in the mid-’80s that I first discovered I was a fan. A drawing standard – symbology and layout conventions – for representing functional “systems”. What’s not to like? But I’ve found very few people who use it, or recognise its value, in my 45-year career.

Today I found another present-day fan – Aaron Gillespie, with his “AaronGillie” blog. Now even PowerPoint and Visio have palettes of boxes and arrows with sticky connectors, so “drafting” has never been the problem if you understand the language and the system you’re representing. Indeed, drafting is part of the thinking toolset that improves both the understanding and the representation.

AaronGillie has a simple summary – and suggests one of its strengths is its simplicity, so he advises against creating additional symbology and semantics. That’s a fair policy when it comes to standardisation but, in this case, I rarely see people actually using it in practice, and indeed later standards (e.g. ArchiMate) claim greater power and interest. So I am keen to extend it for my own needs.

For purely practical drafting and sharing / communicating purposes, one of the conventions is to limit system decompositions to 2 levels only, with the occasional 3rd level, and 3 to 5 (max 6) modules per level. It’s about how much a human can get their head around in one diagram, one large “page”.

IC(C)OM – Inputs, Controls, (Calls), Outputs and Mechanisms – are really just different classes of Inputs and Outputs, in terms of what they do and in terms of how their state and/or identity change during the process(es). So I’d prefer an I/O taxonomy that reflected that even more generically. Why? Because I want to represent “the whole world” in such a process model.
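As a sketch of what that more generic I/O taxonomy might look like – the names, attributes and examples here are entirely my own illustration, not part of the IDEF0 spec or any tool’s API – the IC(C)OM roles can be treated as sub-types of a single generic flow, distinguished by what happens to the thing on the arrow:

```python
from dataclasses import dataclass
from enum import Enum, auto

class ArrowRole(Enum):
    """IC(C)OM roles, treated as sub-types of one generic flow."""
    INPUT = auto()      # consumed / transformed by the process
    CONTROL = auto()    # constrains the process; itself unchanged
    CALL = auto()       # invokes another (shared) process model
    OUTPUT = auto()     # produced by the process
    MECHANISM = auto()  # enables the process (tool, agent); survives unchanged

@dataclass
class Arrow:
    """One arrow on an IDEF0-style diagram, with the generic distinctions."""
    label: str
    role: ArrowRole
    state_changes: bool     # does its state change during the process?
    identity_changes: bool  # does it become a different thing altogether?

# Illustrative instances: an Input is transformed, a Mechanism is not.
raw_material = Arrow("raw material", ArrowRole.INPUT,
                     state_changes=True, identity_changes=True)
drawing = Arrow("design drawing", ArrowRole.CONTROL,
                state_changes=False, identity_changes=False)
lathe = Arrow("lathe", ArrowRole.MECHANISM,
              state_changes=False, identity_changes=False)
```

On this view a Control or Mechanism is just an input whose state and identity survive the process, which is the generic distinction the taxonomy would be built on.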

Which leads to my second problem: I can’t have any arbitrary limits on the breadth and levels of decomposition. All taxonomy is binary at root – this / not-this – repeated as many times as we need. So what I need to exploit are decomposition rules, so that any level of abstraction / compression of any part of any such model can be presented, or not, by selection in the user interface. I want “one model”, but I don’t ever want to see “the whole on one page” except at a one-page level of abstraction.
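The “one model, selective presentation” idea can be sketched as a simple tree rendered to a chosen depth – a minimal illustration of the principle, with hypothetical names and no claim to be any existing tool’s data model:

```python
from dataclasses import dataclass, field

@dataclass
class Box:
    """One activity box; its children are its decomposition, to any depth."""
    name: str
    children: list["Box"] = field(default_factory=list)

def render(box: Box, depth: int, indent: int = 0) -> list[str]:
    """Present the model down to `depth`; deeper detail stays collapsed."""
    lines = ["  " * indent + box.name]
    if depth > 0:
        for child in box.children:
            lines.extend(render(child, depth - 1, indent + 1))
    return lines

# A0 decomposed two levels down; splits repeated as many times as needed.
model = Box("A0 Run project", [
    Box("A1 Plan", [Box("A11 Scope"), Box("A12 Schedule")]),
    Box("A2 Execute", [Box("A21 Build"), Box("A22 Test")]),
])

# One-page abstraction: only A0, A1, A2 appear; A11..A22 stay hidden.
print("\n".join(render(model, depth=1)))
```

The same single model answers every view: `depth=1` gives the one-page abstraction, `depth=2` the full detail, and a real tool would let the user pick the depth per branch rather than globally.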

AaronGillie has a wonderful, manually drawn, worked example on his IDEF0 page – which is scarily (meta-)close to my whole project – (Maslovian) “Actualisation” of a human life. The aim of this human life is a process model of the whole world. I need both drafting tools and presentation tools (or one tool with edit and present modes – how hard can it be?) that exploit the decomposition rules horizontally and vertically.

I’m more convinced than ever that the “IDEF0 style” will do the job for me, even if I have to tweak the previously standardised conventions. Anyone out there who could program through the APIs of PowerPoint, Visio, Sparx-EA, MagicDraw, Modelio, Archi … more?

Another present-day fan here on LinkedIn says:

“There is much formalism in the IDEF0 spec, which is great as it can therefore be used in a formal way when needed. I tend to use it in a slightly looser way, keeping to the core principles but not worrying too much about the decomposition rules and process numbering”

That’s my essence too. I just want a tool that can implement more generic rules, rather than the standard set of conventions.

Systems Connections

In a break from writing, one or two pieces in need of reading and listening caught my attention. It’s all related.

In an ISSS context there has been some debate between two schools, across the many different aspects of systems definition: complex detail for systems of different types in different real-world applications, versus abstraction of the systems aspects common to systems thinking generally. Obviously I’m for the more generic conception – being clear what we mean when we call something a “system” and what we mean by “systems thinking”. The angels are in the abstractions, even though the devil will always be found in the details of the many different types of system of different types of things. Gary Smith is “collecting” together the many different historical attempts at “defining” systems, and Bruce McNaughton’s response seeks the generic systems fundamentals. (Membership discussion board links.)

Also from an ISSS source – Dennis Finlayson – a summary of Mo Costandi’s book, an article with the click-baity title “Is the body key to understanding consciousness?” No idea what the original ISSS context was, but obviously embodiment is fundamental to evolutionary views of consciousness and our minds – most successfully recently from Mark Solms and Iain McGilchrist. The piece has lots of the surgical reports on experiencing phantom limbs and other proprioceptive effects – the stuff I got originally from the likes of Damasio and Sacks (et al) – which force you to think about where conscious experience really lies. Damasio famously rejected any algorithmic, computational, systems view until he eventually came on board.

And talking of Mark Solms – two things came up via social media. One, an Aeon paper – shared by Dan Dennett, no less – on “blind-sight”, which makes the distinction between the “qualia” – the affective impression – of seeing, and any actual objectively detailed “image” of the seen. When I made the Solms connection, he referenced a JCS paper by Dennett that includes a sympathetic review of Solms’ book. Love it when a plan comes together. “Who’d have thought” – the convergence is upon us.

And just to round things off, a conversation between Mark Solms and Michael Levin on “The Meaning Code” YouTube channel. Not the greatest premise for dialogue, but some interesting content. (Accuracy comes at the cost of complexity. When it comes to systems, less is more. “Deflationary” Friston. Consciousness as “raw feeling”.)

Michael Levin is relatively new to me – a colleague / collaborator / protégé of Dennett at Tufts – here a joint paper, “Cognition All the Way Down”, from 2020. The intentional stance at all levels / scales. And here Levin’s introductory video – “What Matters to Me and Why?” Love this selected quote on his Tufts page:

“Computer science is no more about computers
than astronomy is about telescopes.”
—Michael R. Fellows

Absolutely. Reminds me of the register-assembly computation exercise – it’s a fundamental process – fundamental to physics, life and consciousness.