References to Yuval Noah Harari’s “Homo Deus – A Brief History of Tomorrow” are coming thick and fast:
Andy Martin in The Independent: “Transhumanism: The final chapter in humanity’s perpetual quest to be kitted out in comforting accessories.” (Previously given only a brief reference here, solely on the information aspect.)
Philip Kitcher in the LA Review of Books: “Future Frankensteins: The Ethics of Genetic Intervention” (ht @KenanMalik), which is combined with a comparative review of “A Crack in Creation – Gene Editing and the Unthinkable Power to Control Evolution” by Jennifer A. Doudna and Samuel H. Sternberg.
Rory Fenton in the New Humanist: “Will progress kill humanism? – the idea that scientific knowledge might one day undermine democratic values.”
I still need to read Homo Deus and digest all three reviews, but initial thoughts are as follows:
In reverse order, starting with Fenton in the NH: there is indeed a risk, one that exercises me daily, that science is in danger of undermining our human values and democratic freedoms. But that is a narrow, ill-conceived, arrogantly populist kind of science; a too reductionist, too objectively deterministic conception of science. It’s a dogmatic ideology I tend to refer to as scientism.
Fenton summarises one aspect of Harari’s position with the following:
“For humanists, free will is absolute, our sole driver. But advances in both neuroscience and computers are undermining this view. Harari cites experiments that seem to show a decision is made before the person is actually conscious of that decision, and fascinating experiments with people who have had their left- and right-brain hemisphere disconnected, who will then justify the same decision with different logic, depending on which hemisphere is being stimulated. Advances in computers that make machines intelligent, if not conscious, leave more scientists convinced that a similar computer-like process must govern the mind. Harari paints a picture of the brain making decisions automatically, which the conscious mind then justifies and takes credit for.”
At this point I can’t be sure whether this says as much about Fenton as it does about Harari, but that paragraph captures both the dogma and the incoherence of the scientistic position. Given that, we would be right to fear a disaster if machine automation were permitted to embody such a flawed scientific conception of reality. Those flaws become out of sight, out of mind, ignorantly accepted and ever more remote from human correction with the increasing pace and scale of automation of machine-learning, processing and control, and their embodiment in invisible layers of algorithms within the web on which we depend. True rationality, true free-thought-based Humanism, needs to get a grip on the reality of flaws in our scientific model before it accepts their mechanisation.
An irony often mentioned here is the implicit importance of democratic freedoms to “free-thought” humanism – maybe not the “absolute, sole driver” suggested above, but pretty fundamental – yet humanists fall hook, line and sinker for a determinism of science that banishes our conscious free-will from the functional picture. It has become a cliché to cite the Libet (and other) experiments, with their “picture of the brain making decisions automatically, which the conscious mind then justifies and takes credit for”, in support of that ludicrous position. (Much has been written about this here; see also the post note.) Neuroscience and information science are undermining human freedom only because they are reinforcing the flawed – objectively deterministic, reductively scientistic – dogma rather than actually advancing. That dogma is in danger of becoming a bar to the self-correcting evolution of science and rationality itself. Let’s not automate it before we fix it by freeing science from the dogma.
Malik, tweeting a reference to Kitcher’s review, refers to:
“The superficiality of Yuval Noah Harari’s post-humanism.”
Amen to that. In fact it’s a summary of a quote from Kitcher: “The gods glorified in the post-humanism of Homo Deus are capricious, superficial, and cruel.” Malik’s thinking is generally nuanced and high quality, so it would seem to bear out my position on the rather pale imitation of scientific rationality on offer here.
Fenton is non-committal on his own position, simply presenting Harari’s warning:
As [flawed] science and technology undermine concepts of free will and a true inner “self”, Harari foresees a threat to the prospect of a world in which we value the uniqueness of each person and trust them to make their own lives. This is not something he necessarily welcomes; rather, his book serves as a warning about where we might be headed.
The reason we may be headed where we humanists should fear to tread is that the science we subscribe to is flawed in a dangerously dogmatic way. The content of science may always be contingent and self-correcting, but this is a more systemic problem we need to address directly. What makes science rational?
In concluding, Kitcher says:
“Readers of Homo Deus wait in vain … for a clear recognition of what has been achieved and a sensitive reflection on how it might valuably be employed. Harari’s stampede to the post-humanist future is unchecked by ethical ruminations.”
“Humanity surely needs more grown-ups.”
That last phrase nicely captures my problem with the populism of Science-101. Good science needs to grow up and respect a wiser view of rationality.
[To be continued, more reading to be done.]
I suspect it may turn out as Martin commented, that “Harari’s Homo Deus is endlessly fascinating.” Humanists who worship the rationality of public scientists need to be just as vocal in supporting public interest in the humanities.
=====
[Post Note: Nigel Warburton, writing more about androids than generic AI, also invokes Libet:
Libet himself left some room for control.
He suggested we can think of ourselves as having “free won’t” rather than free will …
Whether or not he was right, the thrust of much recent neuroscience is that far more of what we fundamentally are occurs beyond the control of our conscious mind …. a bleak picture of what it is to be human, but it may be accurate. Perhaps we are closer to [the robot] in some respects than we might like to think.
“Closer”, “far more”, “in some respects” – all carefully qualified, but odd again to find the philosopher so open to the bleak conclusion, even though he brings in many other sources that I use here. I too invoke “free won’t” as the better model. Far from being bleak, the recognition that our conscious will is very small compared to our many layers of pre-conscious action is encouraging evidence that our consciousness really is an emergent, evolved capability of an intelligent, sentient mind. Furthermore, it is evidence that it has evolved to be efficient. The kind of consciousness worth having. It’s the greedy reductionists who see “small” as some minor skirmish standing in the way of determinism’s total victory over humanity.]