Definition as a Coffin?
“Hold your definition” is a plea by philosopher Daniel Dennett, often cited here on Psybertron, when dealing patiently with his scientific friends. Any discourse that starts with apparently clear definitions, manipulated solely by logic, is inherently limited by the fit between the history of those definitions and the future of reality. At best, definitions are tentative outcomes from any discourse of any complexity.
My mind was caught this week by the idea of definition as a coffin for what Anatoly Levenchuk calls “dead-think” in his book, which forms the basis of the Systems Management School course “Systems Thinking 2020”.
[Aside – an important post of mine from 2015 discusses the temporary / contextual / contingent nature of objective identity-based definitions, anywhere from physics to politics.]
More on that later, but first, how did we get here?
The Circle from Cybernetics to Systems Thinking
The “Cyber” root has been behind the Psybertron project since I started it 22 years ago, with the rhyming “Psy” prefix emphasising the psychological over the physical perspective, and the “tron” alluding to the increasingly electronic, automated context of our 21st-century journey into “What, Why and How do we Know?”. The project was triggered by the increasingly despairing sense that what is “known” has a much more significant psychological aspect than the received wisdom of the objective “STEM” sciences had us believe in the previous 20th century. That, and the sense of the inevitable: that algorithmically automating this stuff – without first addressing this problem – could only make it worse.
The 21st-century experience of free, ubiquitous, electronic communications certainly bears out those fears, but little did I know. Garbage in, more extreme garbage out, as they say, even in machine-learning / AI?
I’ve recapped the place of Cybernetics and Systems Engineering / Thinking in the project several times over the years. It was July 2002 when I first made the Cybernetics connection explicit and noticed that, lo and behold, the original intention of those who invented it – at the 1946 Macy conference with Wiener and von Neumann – was that it concerned human decision-making and human systems of governance from the start. I was taking this human psychology angle for granted (above) in my own philosophical researches. It was January 2012 before I was prompted to go back and read Wiener’s original 1948 Cybernetics. And it was January 2018 before I noticed that this human cybernetics had been dubbed the Second Cybernetics as long ago as 1963, since those first working with it in early systems engineering and electronic computing applications had clearly forgotten what the human originators intended by “kybernetes”, the root of governance.
Anyway, as I say, it’s not the first time I’ve recapped this story, most recently with this (March 2022) reference and this (August 2021) reference, in which I made the Systems Engineering to Systems Thinking connection explicit. Having been an engineer working in systems of many kinds my whole career since the 1970s, “systems engineering” was informally central to everything anyway, implicit even as I was working the day job in the engineering of electronic information systems explicitly.
In that post I acknowledged …
Anatoly Levenchuk, the then chair of the INCOSE Russian chapter, and his colleague Victor Agroskin, still the smartest people I ever met anywhere in any context.
… as the people who first made Systems Engineering (now Systems Thinking) explicit for me, as the topic under consideration. The English text of Anatoly’s latest book, mentioned in the introduction, is intelligently browsable on-line here, once you’ve registered for the Systems Thinking course. There is also a downloadable PDF of the December 2021 text. (Personally, for anything over a few pages I still prefer to read and review actual books, but let’s see how we get on. This is a 358-page book.)
Initial Review
As I write this I’ve only read and skimmed parts of Systems Thinking 2020, but given the above, it already comes recommended.
Firstly, the 358 pages are all content. Apart from the Table of Contents, there are no “end materials” – index, bibliography, references or notes – to give any clues. All additional resources – and there are many – are linked within the text. (I often prefer to compare notes on these before I read any non-fiction book in full.)
Also, in my experience, idiomatic Russian is handled very badly by things like Google Translate, and working with smart people like Agroskin and Levenchuk in on-line text and blogs has proven too hard, except where they were doing their own real-time translation of their Russian thoughts into oral English for me. The good news is that the English translation of the book is human (by Ivan Metelkin) and, whilst additional native-English-speaker editing will no doubt further improve the read and clarify intent, this text is entirely intelligible.
Details, Details.
I picked up early on Levenchuk’s focus on pragmatism and practicality. One of the earliest philosophical things I wrote (2006), after more than a decade of modelling dictionaries of terminology for systems engineering purposes, was a recognition that, whilst many of the problems with meaning (epistemology) involved more philosophical abstractions, that project was primarily pragmatic – for use by engineers on deliverable projects.
The principal concerns were ontology – a model of what existed – based on pragmatic interpretations of classification and set theories, avoiding over-reaction to such anomalies as Russell’s Paradox, so that anything useful could be said about anything.
At first sight this looks like the age-old “perfection is the enemy of the adequate”, which can endanger the delivery of any project, but in fact Levenchuk points out that this is a misunderstanding about levels of thinking that need to be recognised as distinct. In very much the same way, whilst Systems Engineering might appear to have morphed into Systems Thinking, in reality these are distinct areas (layers) of consideration:
- Systems Project Engineering
- Systems Engineering Thinking
- Systems Thinking
Levenchuk’s style is to provide the reader / trainee / user with a “cheat sheet” – a prescriptive procedure and advice for practical use – as well as providing rationale and background on the development of the methodologies and the supporting education and training resources. But it is vitally important that the right cheat sheet is applied to the right task. Systems Thinking is not a substitute for engineering project execution best-practices. What it is, is a methodology for helping shape, define and prioritise aspects of a complex project, or for architecting a programme or system of future activity – deepening understanding and knowledge of such activities, quite distinct from simply “doing” them; knowledge and understanding whose value materialises should that doing meet unexpected issues and future opportunities. (Significantly, “surprise” – the sensed gap between expectations and reality – is fundamental to the “Active Inference” school of Systems Thinking – more later.)
Quite recently here, I speculated on a more sinister take on the “devil in the details”, but Levenchuk provides clarity on the distinction between:
- the devil in the details, and
- an angel in the abstractions.
We need both, in different places. The architecting requires knowledge and understanding of the abstractions, and of which details are insignificant to that task and can safely be ignored. The execution requires practical knowledge of more of the details. (In my own post above, the last line acknowledged that when it comes to details what we’re missing are relevance and appropriateness to the matter in hand. Systems Thinking addresses this.)
This is a book about the thinking in advance of the doing. Shaping or architecting a plan for the doing, but neither the plan nor the doing per se.
[I recall many examples of working with planners and project engineers who didn’t get this and forced inappropriate detail just because they could. E.g. “I know from experience and documented best-practices that our plan will need to include this, this and this, so I’m not going to let you ignore them now.” – sigh!]
Complexity
Complexity is an explicit topic from the outset, in the opening sentence of the introduction:
Systems thinking helps to solve complexity in a variety of projects: it makes it possible to think one at a time about everything important, temporarily discarding the unimportant, but without losing the integrity of the situation, the interplay of these separately thought-out important moments, systems thinking manages attention in complex collective projects.
The idea discussed above – managing attention to which details are appropriate and relevant, where and when – is in that first sentence: it’s intractable to think about everything, everywhere, all of the time in a complex situation. It’s why, in my own work, I think architecturally. There’s a whole section On Thinking in complex situations generally, which prompted my attention on “definitions” when I first skimmed the book.
Having everything well defined is really only a feature of closed systems, where the scope and complexity are relatively simple and amenable to all details being known in advance (i.e. no surprises).
Real projects – real-life human endeavours on any scale – are not only complex, but because of that complexity they are also effectively open systems: systems some of whose sub-systems and components will arise from considerations outside the intended scope of the endeavour.
There is a tendency to think of definitions of the objects of interest / within scope of any endeavour – establishing well-defined terminology – as critical or fundamental to that endeavour. Indeed, data dictionaries and class libraries would appear to be predicated on that presumption.
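(A minimal sketch of that presumption – the terms and definitions here are hypothetical illustrations of mine, not from the book: a data dictionary treated as a closed lookup works only until a term arises from outside the originally defined scope.)

```python
# A minimal sketch of the closed-world presumption behind data dictionaries.
# All terms and definitions here are hypothetical illustrations.

data_dictionary = {
    "pump": "a machine that imparts energy to move a fluid",
    "valve": "a device that regulates flow by opening or closing a passage",
}

def define(term: str) -> str:
    """Look up a term, presuming every term of interest was defined in advance."""
    return data_dictionary[term]

print(define("pump"))  # works: the term was foreseen within scope
# define("slug catcher") would raise KeyError: real projects surface terms
# from outside the originally defined (closed-system) scope.
```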
Definitions
Levenchuk has sections on Terminology in his On Thinking chapter, entitled:
- “Words-as-Terms Are Important and Unimportant”, and
- “Definition: as a Coffin for a Dead Think”
The latter is a play on (or mistranslation of?) the Russian philosopher Shchedrovitsky, who said “A definition is a coffin for a dead thought”. As noted above, I’d like to think US philosopher Dan Dennett would agree. So long as there is still thinking to be done, a definition of a term referring to the concept of an object in the real world is little more than a placeholder. In systems thinking, there is always thinking to be done. So much so that Levenchuk even recommends proceeding without using the term to refer to the object, instead using language about the object and its properties and relations to its real-world activities, functions, roles and processes, for as long as possible.
In the former, the paradox that terminology is both important and unimportant is first introduced. Despite best intentions, assuming that well-defined terms mean well-defined concepts and objects ignores the fact that within all but the simplest closed systems – in any real complex system – there are many sub-contexts of sub-systems and multi-discipline divisions of real-world knowledge and understanding. Levenchuk says:
The meanings of terms (and any other words, even if they are not called “terms”) are determined statistically, not precisely—and this is done by using them in different contexts. Guesses about the meaning of terms are constructed by studying extended texts describing different situations, by studying different relations of the concepts denoted by these words with other concepts denoted by other words used side by side. When determining the meaning of terms, we do not read definitions, but we examine diagrams, texts, and sets of expanded statements containing the term of interest.
Here he highlights relations, particularly at the level of thought – something that could in fact apply to an ontology of what exists in the world at a fundamental level, defined in terms of relations – but here we are being more practical. When creating formal dictionaries – say in class libraries for systems integration – it is common to focus on relations to neighbouring types. This is partly for efficiency (it’s always easier – indeed necessary – to build on concepts that already appear to have understood working definitions), and partly because avoidance of ambiguity demands that definitions at least distinguish one item from another with which it might be confused. As a result, formal library definitions often take on the very repetitive form “a B is an A where X applies”. However, we mustn’t overlook the paradox that, despite appearances, such formal definitions can never be as precise as we might hope to achieve in a simple closed system.
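(As a hedged illustration of that repetitive pattern – the names and definitions are invented for the sketch, not drawn from any actual class library:)

```python
from dataclasses import dataclass

# Sketch of genus-differentia style library definitions: each entry is defined
# by relation to a neighbouring (super)type plus a distinguishing condition --
# the repetitive "a B is an A where X applies" pattern.

@dataclass
class Definition:
    term: str         # the label "B"
    supertype: str    # the neighbouring type "A" being built upon
    differentia: str  # the condition "X" distinguishing B from other As

library = [
    Definition("pump", "machine", "it imparts energy to move a fluid"),
    Definition("centrifugal pump", "pump", "it imparts head by rotational acceleration"),
    Definition("reciprocating pump", "pump", "it imparts head by a displacing piston"),
]

for d in library:
    print(f"A {d.term} is a {d.supertype} where {d.differentia}.")
```

The repetition is the point: each definition leans on an already “understood” neighbouring type, which is efficient but quietly imports whatever imprecision that neighbour’s own definition carries.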
When discussing definitions more generally, beyond this systems thinking context, where identity may be based on definitions, I often cite “good fences (make good neighbours)” (after Robert Frost) or “think before opening, and always close, the gate in a fence in the forest” (after G. K. Chesterton). It’s an adage I learned from Magne Valen-Sendstad – the most experienced creator of library definitions I ever worked with. Essentially, bearing in mind the paradoxes that Levenchuk describes above, a good – formal, logical – definition is always worth documenting, even if it inevitably turns out to be inadequate later in the real world. Boundary disputes are easier to resolve if both neighbours know where they stand and the boundary is a fence rather than a fortress battlement. And also, if you bump up against an existing boundary – a definition – you don’t know much about, and it stretches off out of sight into the forest of real-world complexity, assume the principle of charity: that whoever put it there had good reason when they did.
On a lighter note, Oxford physicist David Deutsch tweeted agreement this very day. Or in our context here:
“The trouble with definitions is that although they can be practically useful, the one thing which they cannot do, is definitively define a thing”
Contingent Conclusion
As ever, anything said is contingent on the future. I have so far read less than 10% of “Systems Thinking 2020”, but I can already say it has very valuable content, recommended for anyone wanting to understand why Systems Thinking is important and why it is distinct from Systems Engineering or Systems Project Engineering.
As the author acknowledges, and as confirmed here, the English translation will benefit from native-speaker editing, but is nevertheless accessible.
It is gratifying to this reviewer to find so much real shared experience reflected in an obviously valuable textbook. Recommended, even on this limited review.
(I will read it to completion and may extract a list of references and sources.)
=====
Post Notes:
In a messaging exchange with the author, we discovered more common ground not included in the current publication, but which is to be part of his forthcoming book.
The idea of “boundaries” which emerge between distinct things in an evolving, self-organising world – in fundamental entropy<>information models – is that of “Markov Blankets”. My most recent reference on this is Mark Solms in a consciousness context and, although these theories have developed over several decades in the Information Science / Theory domain, Solms’ immediate source is Karl Friston. And Friston is someone with whom Levenchuk and Metelkin already have a working relationship. Small world.
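(For the curious, a minimal sketch of the graphical notion – my own illustration, not from Solms, Friston or Levenchuk: in a directed dependency graph, a node’s Markov blanket is its parents, its children, and its children’s other parents – the “boundary” that statistically screens it off from everything else.)

```python
# Minimal sketch: the Markov blanket of a node in a directed dependency graph
# (parents, children, and the children's other parents). Conditioned on its
# blanket, the node is statistically independent of everything outside it.
# The graph and its node names are hypothetical illustrations.

graph = {  # node -> set of its parents
    "sense": {"external"},
    "internal": {"sense"},
    "action": {"internal"},
    "external": {"action", "background"},
    "background": set(),
}

def markov_blanket(node: str, parents: dict[str, set[str]]) -> set[str]:
    children = {c for c, ps in parents.items() if node in ps}
    co_parents = {p for c in children for p in parents[c]} - {node}
    return parents[node] | children | co_parents

print(markov_blanket("internal", graph))  # {'sense', 'action'}
```

In Friston’s framing, the internal states of a self-organising thing have exactly its sensory and active states as their blanket – which is what the toy graph returns for “internal”.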
A good deal of my own research – which takes information theories as more fundamental than physical science – is primarily about human knowledge & decision-making (epistemology & cybernetics) in the complex politics of science & psychology, living in the real world.
In his forthcoming work Levenchuk intends to use a more biological / evolutionary paradigm, although he still intends to follow “the pragmatic turn” – whilst I still pursue a metaphysical bent 😉
Levenchuk’s sources include:
(*) See the follow-up post reviewing the “Emperor’s New Clothes” paper.
HOLD: One key thing for my areas of interest – the self-organising “individuals” topic and defining their boundaries in words – is that such things can be defined by “categorical” (good / bad / subjective) classifications, whereas most people expect “objective / logical” clarity in definitions – hence important AND not-important.
=====