Economics – in Theory

I left my review of Paul Mason at his revelation that Karl Marx is to be our saviour.

Not a bombshell that Mason has serious left leanings. Even as a journalist he’s always worn his heart on his sleeve supporting the financially underpowered – most recently in doggedly sticking up for the Greeks against the EU, ECB and IMF.

Interesting then that the Marx he recommends is not the Marx of Das Kapital.

Marx’s outline draft papers, the Grundrisse, contain his Fragment on Machines. In this he saw fundamentally that knowledge was the valuable aspect of production – the how-to of labour inputs – even as the physical aspects of work became automated, with labour increasingly reduced to supervising the machines. And echoing Mason’s earlier reference to universal knowledge in the info-tech wave – not so much polymathic as genuinely generic and conceptual understanding – Marx gave us The General Intellect. The ability to know how; the understanding of how things work independent of the specific economic production or technology context. Mason gives us real examples from pre- and early info-tech days (telegraph operators) and the fact that knowledge involves understanding the psychology of the fellow humans in the wider system, not just the immediate technology involved.

After showing how this view fits with the labour-theory of capitalism, Mason brings us to an interesting question: why does a pseudoscience like economics need a theory at all? What’s wrong with actual documented effects, with pure empirical evidence?

After a reminder of the time-axis in labour-theory – data points are not static, but represent different stages in different process instances, so they can’t simply be manipulated arithmetically – we get an Einsteinism that encapsulates the focus on the conceptual “general intellect”. Labour-theory, or any worthwhile economic theory …

… belongs to a class of ideas that Einstein described as “principle theories”: theories whose aim is to capture the essence of reality in a simple proposition, which may be removed from everyday experience. Einstein wrote that the aim of science is to capture the connection between “all experiential data in their totality” – and to do this “by use of a minimum of primary concepts and relations”.

The more clear and logically unified these primary concepts were, the more divorced they would be from data. The truth of a theory is, for certain, borne out by whether it successfully predicts experience, but the relationship between the theory and the experience can only be grasped intuitively.

Mainstream economics evolved into a pseudo-science that can only allow for statements obtained through crunching the data. The result is a neat set of textbooks, which are internally coherent but which continually fail to predict and describe reality.

Fascinating. My whole agenda in there. Autistic economics for sure, but also the weirdness of causation in general. The myth of empirical objectivity, when it comes to static data and predictive theory. In the quest to appear “scientific” – a word that monopolises credibility – any socio-politico-economic theory misses what science itself fails to notice. Reality is not scientific.

This much is given. But there are a couple of more contentious points.

Information IS immaterial, as Wiener (and Dennett) say. But Landauer (IBM) was right: its representation is always physical, whether in silicon or in synapses. It’s these physical representations that may tend to zero marginal cost. The information content, however, remains independent of its physical embodiment.
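To put a number on Landauer’s point – a back-of-envelope check of my own, not something Mason gives us: erasing one bit dissipates at least kT·ln 2 of energy, which at room temperature (k ≈ 1.38 × 10⁻²³ J/K, T ≈ 300 K) comes to about 2.9 × 10⁻²¹ joules per bit. Vanishingly small – hence marginal costs tending to zero – but never actually zero, because the representation stays physical.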

Secondly, information is NOT knowledge. Data and information may be freely networked – without any intrinsic hierarchy – but know-how and wisdom do come in layers. Mason himself is using the universal-knowledge / general-intellect ideas. These are free to the human that has them, and whilst their distribution is not controlled by any pricing model and their information content can be “shared” democratically, the intellectual capability to understand them still varies hierarchically.

He’s right about one thing, however. Once such increased knowledge can be embodied in machine processes, it can be standardised in all machines. Standards (my life for the last two decades) are information products that can be freely distributed. Generic intellect is always ahead of this game – unless machines can also think. Appropriately, AI is Mason’s next topic.

Unfortunately, he doesn’t pursue this beyond increased automation, whereby not only the labour inputs, but the machines themselves – real or virtual – also tend to become free through “repeated applications of info-tech”. It’s the repeated algorithmic application of information patterns upon information patterns. There’s no AI; no machine thinking. But if both labour and machines (capital) lose any marginal (financial) value, effectively becoming free, what is capitalism left with?

Atomisation of Markets & Labour @paulmasonnews

Reading Paul Mason’s “PostCapitalism – A Guide to Our Future” after earlier mentioning the previews and, as is my wont, recording some notes around the mid-way point. That is, I don’t really know his conclusions for future action yet, but as previewed it is indeed full of material I already identify with, and have indeed been studying and working with for a couple of decades myself.

As Mason indicates, “big business” has been looking at the democratising effect of information connectivity on more networked, peer-to-peer, bottom-up or individual-node-outwards means of exchanging value for quite some time. He credits Drucker (as I often do) as one of the first to recognise how this game-changed traditional capitalist economic models, since copiable & shareable information is NOT a scarce resource controlled by supply and demand pricing. IPR is an “artificial” legally enforced arrangement – or not, for those who go the open-source or creative-commons routes. Without supply and demand pricing, the value exchanged is “social capital” between individuals and their self-identified groups. Hierarchies are gone from such socially networked arrangements — or are they? I’ll need to come back to this point.

So for now, not only does networked information undermine resource pricing models, its ubiquity runs more and more deeply through more and more “products” and their positive “externalities”. All “markets” are affected, not just those explicitly in information products.

Much of this is not new. After Drucker – himself after Kondratieff (*), Schumpeter and Parker-Follett – Mason also credits Kevin Kelly and many of the visionary celebrities of the “wiki” generation. Business, tech business or otherwise, is of course continuing to find the best angles to extract financial value from the impending Internet of Things. But few have yet really adapted to a post-capitalist situation where financial gain based on monetary value is no longer the main part of economic reality. “We” are really valuing networked social value.

The “we” is important too.

Mason is an economist, and in building to the above assertions, he gives us a good history of how national currency values relate to tangible standards – gold-standard, Bretton-Woods, banking reserves and so on. And a great deal of fascinating — and scary — stats on relative wealth and growth through the 4th Kondratieff wave.

Agreements between nations have been crucial to the stability of economies — gentlemen’s agreements or formally institutionalised rules. Globalisation in the broadest business sense – international freedoms of, and access to, human and physical resources – already weakens such inter-nation controls, and of course a globalised network of socially communicating individuals further destroys their power.

One of Mason’s threads is that this “atomisation” of individuals — both as labour and as consumers — was largely engineered by traditional (and neo-liberal) capitalist free-marketeers as a means of minimising costs and maximising revenues for their traditional businesses. What is not being addressed is the fact that this same atomisation of networked individuals has undermined the pricing model for future business, beyond those that make hay from their near-monopolistic tech positions. The Googles, the Amazons etc. But of course, since the social values are moral rather than financial, a monopolist can be just as effective if they choose a free-to-consume business model. The Wikipedias, the Androids and Linuxes. Ownership, as a choice, is no longer easy to enforce and exploit if one or more players choose not to play that game.

Value becomes (always was) a matter of morality, not finance.

But individuals appear on the supply side of the market too. Labour has lost its solidarity. We have lost our common identities. Demographics are simply post-hoc statistics. (Interestingly, it was Paul Mason I was quoting when I used the expression “bogus identities” in my Identity Politics piece. Mason was using it in the context of Greece vs the Euro and the migrant crises.)

Many useful quotes recorded from the first half of Mason’s book, and his Chapter 4 on “The Long Disrupted Wave” is recommended in its own right for the assembly of evidence it presents, but for now this:

[This story so far] is just another way of saying what Benkler and Drucker understood: that info-tech undermines something fundamental about the way capitalism works.

[However] none of the writers I’ve surveyed above achieves [a description of what the dynamics of a post-capitalist world would look like].

But what if somebody did anticipate the information-driven fall of capitalism? What if someone had clearly predicted that the ability to create prices would dissolve if information became collectively distributed and embodied in machines? We would probably be hailing that person’s work as visionary. Actually there is such a person.

His name is Karl Marx.

Time to read on. [Continued here …] [And finally here …]

=====

(*) Kondratieff? I’ve always referred to him as Kondratiev, and I see on Wikipedia that the former redirects to the latter spelling too.

=====

Additional Reference Notes to be elaborated:

  • P6 they knew didn’t work.
  • P11 Basel 2 licence to game the system.
  • P15 trust.
  • P24 British miners, Foucault, Minitel.
  • P25/26 network squared, smile outside market – virtual trophy virtue.
  • P77 final observation, markets outside current economy.
  • P79 on – the long cycle. Positive national story.
  • P85 Shannon, Turing, Drucker. Strategic innovation – profit-driven production.
  • P86 automation.
  • P90 all parties gaming Keynes.
  • P91 neoliberal atomism – got it.
  • P99 winners and losers part of downswing.
  • P102 globalisation winners & losers.
  • P112 Drucker post-capitalist society, knowledge the resource. Universal educated person not polymath, but metamath understanding pure concepts. And federal peer-to-peer network.
  • P115 networked types yes, but knowledge is about knowing, not info overload.
  • P117 and on, economy of raw materials and instructions, not finished products. IPR legally and socially enforced. Shareable, not consumed – Dennett, info independent of physical representation. IPR is about prevention. Apple mission. Breathless Kelly, dot-com boom etc.
  • P125 digital exceeds analogue – missed conceptual point. IoT prophets of pc. Goodwill. Wiki quality hierarchy.

Post Capitalism @paulmasonnews

Paul Mason’s book “PostCapitalism” is out this week, but has been previewed in talks and articles.

Lots of material I’ve used here. Schumpeter and Kondratiev waves of economic cycles. Freeman and Perez “Techno-Economic Paradigms” building on Kuhn. Drucker, the guru of management gurus, standing on the shoulders of Parker-Follett. The 5th wave is clearly the information-driven wave – the Information TEP – products (even physical products) whose value largely comprises or depends on information. The point is that information (like love) increases in value when shared and isn’t made scarce by copying – that’s quite a shift in capitalism’s foundation. The only scarce resource is the creativity of new patterns, tools and uses. The information itself, and the knowledge in people, sit in connected networks and are therefore non-hierarchical. Again, a change affecting the established capitalist model. In many ways the thesis so far is not new – we’ve been working on it for 25 years already.

Personally, I think other aspects of markets will retain scarcity and hierarchy. Knowledge is more than information, in the same way information is more than data and wisdom more than knowledge. Mason’s thesis seems to be that the flattening of the network will destroy pricing mechanisms. Perversely, as Kevin Kelly also predicted, even where hierarchical market power remains, even if only legally enforced, it will tend – has already tended – to monopolies. When one source is easily shared, why create a second source? Hence my point, the real value-add will be beyond the content simply as shareable information.

A network of connected “individuals” – connected but independent, not a monolithic collective – will seek something different from post-capitalism. This much is true. Looking forward to reading.

Science vs Philosophy

I had to capture this one for posterity.

[Post note to state the obvious. Obviously there is no either/or conflict or choice to be made, each has their own place in the scheme of things, and each should recognise the place of the other. The point of the rhetorical quip is that in general many scientists are “philosophy deniers”. I’ve yet to meet a philosopher who would “deny science”, even when aiming to point out flaws, questions and alternatives. In my experience many self-identifying as scientists are “dogmatic” about the primacy of their (contingent) science and disingenuous when it comes to proper scepticism. Scientists will (scientifically) claim lack of clarity and empirical objectivity, and even intentional obfuscation, by the philosophers, but in general the philosophers (if they’re any good) will argue more carefully and respectfully.]

The Objectivity Fetish

Interesting piece in the Grauniad today by Karen O’Donnell (a student of Prof Francesca), particularly for the (male) responses in the comment thread.

At the outset, I should say I’ve no idea why it is cast as a response to (the media myths of) the Tim Hunt debacle, other than the Grauniad audience-attention-grabbing motive, because it obscures an important gender issue. Pity. However, that said, the point is worth making.

One of my agenda threads is “Vive La Difference” – not to deny important gender differences, differences that mean the female view has advantages that we would lose from the meme pool.

The problem here is casting the difference as “emotion” vs “objectivity” – let’s face it, an argument as old as philosophy itself. But, continuing with that language for a moment, even emotion is a valuable part of academic, research and/or (any) discourse – scientific or theological – it takes thinking to places it might not otherwise reach. In the investigative, hypothesis-seeking, exploratory, creative process, passion is a powerful force. And it’s an engaging and motivating force, as Karen says. Sadly, as well as the gender agenda, some of the commenters have the science vs religion agenda in mind too – missing the point of Francesca’s school of theology. As one of the commenters points out, arguments involving passion are every bit as important in the history of science as anywhere else.

Obviously, documenting an “argument” in support of a testable proposition – in whatever academic field – will typically require objectification of the story and, since that may include topics whose subject matter includes human psychology, objectification of the subjective content too. But this is the point where contrasting the passion with the objectivity misses the real gender point.

The point is really about how narrow and broad thinking are joined together in the human mind.

I’d recommend Iain McGilchrist’s Master and Emissary. After Nietzsche’s phrase, he is pointing out that narrow objectivity, and the logical rationale that manipulates such well-defined objects, is the emissary, the servant of our wider senses. Something Einstein understood. The constant focus on objectivity – a fetish, I consider it – shuts out half of our brains. In women the halves appear to remain better connected. [Lots of left-right brain myths and men-vs-women myths – male inability to walk, talk and hold two thoughts at the same time, etc – arise from these differences. The myth is that these are due to differences between the halves of the brain, whereas the reality is more to do with how the two halves communicate with each other permissively.]

Speaking archetypically, women are – fortunately – more in touch with their wider senses than men are. A quality we’d do well to cherish. If that broader range of sense and emotion, the passions, also has motivation and engagement benefits, we’d do well to cherish those too.

See here for Master and Emissary.

See here for Vive La Difference.

See here for Left-Right Brain Myths.

Convergence of Computation, Mind and Genetics?

There is certainly a coming together of many related ideas, which is very exciting, but there are some implicit assumptions in that “convergence” that blur details and may not actually be right in any of the three fields.

This post is to record a position. The linked paper …

“CONVERGENCE of Neuroscience, Biogenetics and Computing
– a convergence whose time has come.”
by Dr Michael Brooks

… is part of a series linking the work of Dan Dennett on the computational aspects of evolution, Craig Venter on the digital informational aspects of genetics and David Deutsch on the fundamental nature of information in physics. I’m a fan of all three, and have referenced their works multiple times in this blog, but I believe there are a couple of traps to avoid in the rush to converge:

Information & computation – the manipulation of information with other patterns of information, in real or virtual “machines” – is a very fundamental process. Information is simply “significant difference”. Possibly more fundamental than physics itself as currently understood in the standard particle model(s).

Mind & brain – cognitive sciences generally are right to see mind & brain as a “computer” – that is, as a “machine” that does computation – but clearly it’s important not to fall into the trap of thinking of the machine here as a physio-mechanical device. Computation is a many-layered process, and when it comes to the computer itself, distinctions between hardware and software need not map simply to the brain and the mind. Information and computation processes are fundamentally independent of any physical substrate in which they may be represented. Independent of the substrate, notice, not just independent of their representation.

At that level, avoiding the trap of over-simplifying the hardware-software view, there is lots of scope for careful work to bring these ideas together. But there is a second trap to be aware of before looking at the convergence with genetics. That trap is accidentally assuming the digital nature of what is being considered. And there are two sides to this trap, both to do with digital objectification – one, that genetics is necessarily digital; two, that the computation is necessarily digital.

Genetics – is real, and it really is about information encoded in molecular patterns of bases in DNA. However, the objectification of those significant patterns as “genes” with distinct boundaries and clear definitions is part of the ontology of bio-genetic science. Useful to the science, but not fundamental to the information patterns – there are a lot of fuzzy edges and apparent trash in between. We have a useful digital model of genes, but the genetics – the significance and manipulation of the information – is not necessarily digital.

Furthermore, the same trap exists in the Mind-Brain convergence too. There is nothing above that says either of these concerns digital information. We tend to think of physical-world computers as familiar digital computers, and whilst there is excitement about the potential growing realisation of quantum computing, non-digital computing is actually as old as analogue computation – I know, many years ago I used to do it for a living.
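To illustrate what I mean by analogue computation – and, ironically, this can only be a digital simulation of it, so take it as a sketch of the idea, with step size and names my own assumptions – an analogue computer solves something like dx/dt = -x by wiring an integrator with feedback, the answer existing as a continuously varying quantity (a voltage, a water level), not as discrete symbol manipulation:

```python
import math

# Sketch only: a real analogue integrator accumulates continuous change
# physically; here we can only approximate that evolution in small steps.

def analogue_integrator(x0, rate=-1.0, dt=1e-4, t_end=1.0):
    """Approximate the continuous evolution dx/dt = rate * x."""
    x, t = x0, 0.0
    while t < t_end:
        x += rate * x * dt   # the "integrator": accumulate continuous change
        t += dt
    return x

print(analogue_integrator(1.0))  # ~0.3679
print(math.exp(-1.0))            # the exact continuous answer, e**-1
```

The interesting quantity is the continuously varying x itself, not any particular digit of it – which is precisely the sense in which the computation need not be digital.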

In the famous Register Assembly Programming case, the exercise is indeed fundamentally digital, and yes, it does illustrate the fundamental nature of computation. How can computation not be fundamentally digital?

What that exercise does show is that basic computation steps lead to complex processing – of unlimited sophistication – only by their combination. The underlying processes remain very simple, even when higher-level languages and tools are used. The integer registers in the RAP case are themselves a representation of the information, which further represents (human) semantics. The model – the ontology – is a digital abstraction, but the information need not be.
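For anyone who hasn’t met the exercise, here is a minimal sketch of a RAP-style register machine – in Python, with my own naming and encoding rather than Dennett’s exact notation. Two primitive steps, increment and “decrement-or-branch”, are enough to build addition, and by extension arbitrarily sophisticated processing:

```python
# A register machine with just two instruction types:
#   ("inc", reg, next_step)          - add 1 to a register, go to next_step
#   ("deb", reg, next_step, branch)  - subtract 1 if possible and go to
#                                      next_step, else branch (register empty)

def run(program, registers):
    step = 1
    while step in program:
        instr = program[step]
        if instr[0] == "inc":
            _, reg, nxt = instr
            registers[reg] += 1
            step = nxt
        elif instr[0] == "deb":
            _, reg, nxt, branch = instr
            if registers[reg] > 0:
                registers[reg] -= 1
                step = nxt
            else:
                step = branch
        else:  # ("end",)
            break
    return registers

# Addition: move the contents of register 1, one unit at a time, into register 2.
add = {
    1: ("deb", 1, 2, 3),   # take 1 from register 1, or jump to End when empty
    2: ("inc", 2, 1),      # add 1 to register 2, loop back
    3: ("end",),
}

print(run(add, {1: 3, 2: 4}))  # {1: 0, 2: 7} - i.e. 3 + 4
```

Note that the integers in the registers are already a representation; the digital character lives in the model, not necessarily in what is modelled.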

You might argue that even in analogue computing there are still digital particles involved – individual electrons in the electrical currents and voltages, or water molecules in the physical flows and levels – but, as already noted above, the information is (may be) more fundamental than even the particle physics.

Food for thought and a fascinating topic.

Islam vs Islamism

I’ve made an issue several times before of ensuring we are careful to use the term Islamism when we ought to, and given today’s upcoming announcements by the PM, I thought I’d make a brief summary of my position now:

  • Problem of Islamism – A security problem. A policy of hatred of others. Jihadism. Promotion of Islamic rule (hence a so-called Islamic caliphate) by any means at the expense of the freedoms and rights (and lives) of other Muslims, sects and non-Muslims. Solution involves both inter-national political effort and lethal force.
  • Problems of Islam – Cultural issues. The source of Islamist ideology (see above). But, also some imposition of repressive, irrational, patriarchal, discriminating and even barbaric practices on other Muslims, even where those same individuals are also members of free democratic and/or secular states. Some problems shared with other religious practices. Solutions involve intra-national political effort and legal force.

i.e. taking care to target Islamism, when that is what is at issue, is not to say Islam itself is without problems or that the two are unrelated. Indeed, thoughtful Muslim commentators see the role of Islam in Islamism, whilst demanding care in addressing both issues. Care that recognises that the political and cultural freedom issues also have quite independent ethnic and nationality dimensions.

Both Islam and Islamism represent problems, but different problems with particular relationships, each with different solutions and each requiring care to avoid conflation of issues and tarring all with the same brush.

Evidence Reduces Trust

A couple of readings and conversations – face-to-face and social-media – recently, that play directly into my agenda of keeping science and humanism honest, and expose where I’m at odds with received wisdom. I’m used to it after 15 years of blogging and, of course, countering received wisdom with alternatives is the point. I’m not simply being contrary; there are important alternatives being overlooked. Received wisdom is simply a tyranny of the majority.

A number of campaigns I support, many of which fall under Sense About Science, make a lot of sense (obviously) and their intentions are laudable. Laudable enough to actively support, as immediate if temporary measures, efforts to get the topics on the public agenda and to curb current excesses and abuses of what passes for scientific knowledge. Starting from a ground zero of ignorance and denial, all progress is positive. But …

But there is a kind of arrogance that says being right follows from making progress. That’s evolution, innit? And there is a valid line of thinking that says so long as we make progress, who cares about being right. That’s politics, but it’s not science. In politics there will be values, but rarely any concept of ever being fundamentally right. Science on the other hand, whilst knowing it is never right, always contingent, does care about approaching knowledge as truth. If it doesn’t, it’s just politics. And this is the Catch-22 again: when there is a political agenda around science, we have to be careful to distinguish the politics from the science.

One of the SAS campaigns is “show me the evidence”, and a corollary of that is “show me all the evidence”, including the null and negative indications – particularly in (say) Ben Goldacre’s #AllTrials demand for publication of all clinical trials, including the failures. Who could argue with that?

Me, actually. This is a political extension of openness and transparency to all considerations and communications. Leaving aside any issues of privacy and security, this may be pragmatically fine from a freedoms and rights perspective, but what is completely impractical is the idea that we all need to consider all available evidence and information. At some point we have to trust the knowledge we’ve got so far, and trust the people with, and sources of, that knowledge. Asking to be shown the evidence is a statement of mistrust, or a default to zero trust in the absence of evidence. So where the process stops – when you have enough evidence to trust – is clearly a judgement. And that depends on context.

So for the #AllTrials case, where the responsible and expert licensing authorities are in the loop, it will probably be practical to set some rules about disclosure to those bodies (transparently available to anyone, too)(*). The burden of trust shifts to our relationship with the authority. In the more general “show me the evidence” case, the practical limits will always be a matter of judgement. Evidence that is easily available – and intelligible in all its subtle nuances to whoever is interested – should never be ignored, but we should not expect to see scientifically objective, intelligible evidence to support every judgement. (This is Dick Taverne’s argument, and in fact he is a founder of SAS.) The need to trust judgement never goes away; it just gets pushed around. Trust is inevitable, and it is where science and the application of science must part company. Trust, like scientific knowledge, is something we should work to maximise; we cannot entirely replace one with the other.

And there are other competing factors that mean it is counterproductive to pursue the objective evidence line exclusively. One is that we will never succeed in achieving watertight definitions of all the objective evidence needed for all situations. And the tighter and more comprehensive such attempted definitions become, the less likely their nuances will be understood by more people. Simpler communications may give the illusion of wider understanding, but that understanding will be at the expense of actual scientific truth. It may be politically sound to pursue that kind of science communication, but we must be careful not to confuse it with the actual scientific knowledge. At some point we always need to trust that the specialist scientists, like the responsible politicians, know better. It’s an illusion to believe we can drive trust out of the system.

Definitions and objective evidence are part of science’s model of the world, and the human world is more than that.

=====

(*) Note: Though even here, where management of the rules and their application has clear authority, it is already possible to predict gaming of the system, whereby potential failures are tested under the radar before being brought into the regulated environment. Rather than selective publication of results we get selective “official” testing. Unintended consequences. i.e. the devil is in the detail of the execution and management, not in the definitions of the rules and processes. Definitions don’t solve the problem.