Mediated Tweeting

I have an agenda that says free social communication isn’t all good, in fact it can be positively counter-productive. Quality communications, leading to quality learning, decisions and actions, benefit from editorial control and goal-directed mediation. Google doesn’t make teachers redundant.

Interesting today to see TechCrunch extolling Twitter’s coming of age – as a vehicle for communicating links between other channels it’s great, no argument, BUT.

They mention, but don’t highlight in this story, that the actual communication to the human involved (the US President in this case) was via expert human mediation, filtering and editing. It couldn’t work any other way, until AI comes to stand for Actual Intelligence – which it will, one day.

The medium is the message, but it’s a different message.

Google+ buzz = new Wave ?

I’m liking the buzz around Google+, and from seeing only the free “tour” (no working account yet), I like the fact it’s the relationship and not the person that is the focus, as was the case with Wave. Groups (circles, hangouts, huddles, etc.) arise from the nature of the relationships, not limited to the crass friending and following paradigms – which maybe made sense in the original university / college campus environment, or early-learning steps in social media, but are just too – well – crass for the real world.

Wave had it right because the “Waves” were emergent from the communication activity, not defined by groups of (yeuch!) friends. The only thing wrong with Wave was how to present the enormous power in a sufficiently usable UI – perhaps the social paradigm for the Google+ UI will work. Hopeful. (Sadly, TechCrunch appears to have a politically motivated agenda against it succeeding.)

Project Management Memetics

Leon sent me a link to this paper a couple of years ago, to which I responded “interesting” – he knows I’m interested in memes. I didn’t actually read beyond the title until today.

The essence of memes is that there is something “self-serving” about patterns of information (*1) which is independent of any rationally intended human purposes in using them. The same is as true of (say) project management procedures and practices as it is of any rational processing of information – my agenda is that this is a problematic feature of management and governance in the most general sense, not just businesses and projects, but any decision-making-to-act process, knowledge-management practices, even that rational domain par excellence, science itself. So I have no doubt about the problems of failing to see the memetic aspect of project management activities – it’s of course where my concerns began, in Oil & Gas industry and Information Management projects 15 or 20 years ago – the reason I’ve been blogging since blogging was invented …. but this is not about me.

In fact none of this is new in management circles, just the new(ish) memetic language, and part of the problem now is that memetics itself is contentious to some people (*1). But even without memetics, the idea that decision-rationality = action-irrationality has been part of action-science management theories (eg Argyris / Brunsson et al) and probably long before that with (say) Parker Follett – guru to the gurus in management.

In any “professional” management situation it is difficult (anathema) to suggest that doing a rational thing is the irrational (wrong) thing to do. You’re mad, surely. “Before we make this decision to act, we should study and agree upon this issue – right ?” Wrong. Act and experience the outcomes (with “care”, in the knowledge of the issue). It’s been called analysis-paralysis for years, but it’s not just “analysis”, it’s following any rational, objective process that delays action, because it is the action that provides experience. Experience is worth more than theory, in practice.

Performing rational (project) management analyses, modelling and management decision-making processes tends to lead to more (project) management activities – ie self-serving – rather than achieving the value-adding goals of the enterprise or project. (IT / IM projects, particularly new, integrated business and/or government (civil or defense) systems, are often legendary in terms of project failure, however those failures are post-rationalized after the event. Not surprisingly there are newer “agile” IT project management processes that force the action and feedback cycle milestones.)

(*1) Patterns of information, known as memes because they are copied (not the other way around), come in many levels; patterns (upon patterns) upon patterns of information (statically defined) and patterns (upon patterns) of their (dynamic) relations, procedures, patterns of use, communication and processing. Because genes – the biological analogue of memes – are based on 4-bases (*2) and n-chromosomes in any given species (*3), there is a popular misconception that genetic copying in biological reproduction is well defined in terms of atomically discrete “digital” genes, whereas memes are somehow more woolly – anything from a single word representing an identifiable concept to the whole idea of ideas, concepts, interpretations, representations even internet crazes, fashions, cultural patterns (even whole religions and cultures) etc. Many people baulk at the idea that “cultural units” (memes) can be considered as discretely as “biological units” genes. Now, reducing things to discrete objects (genes or memes, or anything else) is part of a wider issue, but genes and memes, their own definitions and the processes and patterns involving their transmission and reproduction are equally complex and ultimately flaky – just equally useful in describing the processes involved – information processing processes both (*4). The analogy is in fact a very good one. It’s about what IS copied and communicated, not prescriptive about what they should be, or how they might be represented when communicated and processed. Naturally, simpler patterns of information (memes or genes) – patterns of information which are simpler to represent – are communicated, processed (and replicated) more easily, so unsurprisingly discrete objects are much more “popular” than complex patterns of information – another self-serving aspect. Simple ideas rule, but often simple may be dumb.

(*2) Even the 4 DNA / RNA bases are not in any sense absolute. They just happen to be the basis of the most prevalent and most studied organic biological forms. Other biochemical possibilities exist. And of course even in R/DNA based life, there are many other non-R/DNA cell structures involved in the processes too. Doesn’t change the essential pragmatic truth of genetic reproduction.

(*3) And even the definition of a discrete species is highly context dependent and controversial when it comes down to it. Different definitions are accepted for different practical purposes.

(*4) Objective reductionism is full of contentious topics when it comes to more subjective things like free-will and consciousness, but this is true even at the most fundamental levels of physics too. Arguments in these topics need to be conducted extremely carefully – avoiding “misplaced-objectivity” and “greedy reductionism” – more self-serving memes.

[Need to come back and link to the implied sources throughout.]

[Post Note : Existentialism and Evolutionary Psychology – Heidegger, Foucault, Dennett and many more in Jon Whitty’s project management presentations. A man after my own heart.]

Nuclear Sense of Perspective

Nuclear Power radiation risks are largely in the mind.
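By way of a back-of-the-envelope sense of scale, here is a rough comparison using approximate, widely quoted dose figures of the kind collected in the charts linked in the post notes below – order-of-magnitude illustrations only, not authoritative data.

```python
# Approximate, widely quoted effective doses in microsieverts (uSv).
# Illustrative orders of magnitude only - see the linked charts for real data.
doses_usv = {
    "eating a banana": 0.1,
    "chest x-ray": 20,
    "NY-LA flight": 40,
    "average annual natural background": 3_000,
    "chest CT scan": 7_000,
    "lowest annual dose clearly linked to raised cancer risk": 100_000,
}

banana = doses_usv["eating a banana"]
for event, dose in doses_usv.items():
    print(f"{event}: {dose:>9.1f} uSv  (~{dose / banana:,.0f} bananas)")
```

The point of the comparison is the spread of orders of magnitude, not the precise numbers.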

[Post notes thanks to Facebook activity.

From Smiffy http://understandinguncertainty.org/node/1272

From Smiffy and XKCD http://xkcd.com/radiation/

From every man and his dog, George Monbiot is a convert http://www.guardian.co.uk/commentisfree/2011/mar/21/pro-nuclear-japan-fukushima

Anyone who was already pro nuclear power, and has had their beliefs reinforced by Fukushima, probably already recognizes the real risks … not the radiation itself, but radioactive materials entering the body – giving you a long, ongoing personal dose – from escaping materials, airborne or waterborne after a loss of containment … including long-term processing and storage of fuel and spent fuel.]

Information on Trust

Trust and information go hand in hand. There is no information without trust. Limited data, maybe; information of real value, no.

Interesting to read this piece on Three Mile Island in the light of the current Japanese problems:

“The understated equivocations of their spokesmen – and their genuine uncertainty about the situation – engendered mistrust, particularly among those in the vicinity. Media coverage citing concerned nuclear experts served to heighten fears.

Soon, misinformation about a hydrogen bubble, which had formed in the containment vessel after zirconium fuel rods were exposed, turned into full-blown and mostly unfounded anxiety about an atomic explosion.”

Mostly Unfounded, yet – despite the massive (but contained) meltdown being seen only with hindsight – a monumental event historically, created by Media Coverage.

Perversely and counter-intuitively yet again, less is more – less communication is better – yes, free communication makes things worse. Is that a political statement ? If I were a conservative technophobe that would not be an interesting statement, but I’m a web-savvy liberal. Must I post the W3C Fig 7 picture again for the techies ? Trust at the top – clearly trust and information feed off each other, but it’s the trust that’s paramount.

Working thesis: Current information value depends on a current stock of trust; current trust depends on previous experience (of information, and action, and … ), not on current information. No amount of “data communication” now can fix a pre-existing lack of trust. Something like that 🙂
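If that thesis were formalised it might look something like this toy model – entirely hypothetical, just to make the shape of the claim explicit: trust is accumulated from past experience, and the value extracted from new information is gated by the trust already in place, not by the volume of data sent now.

```python
# Toy model of the working thesis above (illustrative only, not from the post).

def update_trust(trust, experience_quality, weight=0.2):
    """Trust drifts slowly towards the quality of the latest experience (0..1)."""
    return (1 - weight) * trust + weight * experience_quality

def information_value(data_quality, trust):
    """Value realised from new data is capped by the current stock of trust."""
    return data_quality * trust

trust = 0.1                                  # a pre-existing lack of trust
for experience in [0.2, 0.3, 0.4]:           # slow rebuilding through action
    trust = update_trust(trust, experience)

# Even near-perfect data yields little value while trust remains low (~0.18).
print(information_value(data_quality=0.9, trust=trust))
```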

Macondo “Permitorium”

Listening to a presentation from the International Association of Drilling Contractors on the Macondo fall-out.

Demands for containment resources – x00% of max spill potential, available on site or within x hours – are being used to reject deep-water drilling permits since the moratorium ended in October. A little bit of a “no spill ever again” level of safety demand before permits will be granted. The result is at least a year of deepwater drilling industry shutdown in the US Gulf, which is a major regional industrial depression well beyond the O&G companies.

(Incidentally – innovative capping containments are also being developed internationally. Ixtoc 1979 was bigger and flowed for a whole year. See previous Macondo threads and comment threads.)

Great Wall Drilling / Hashwe(?) / Repsol / Saipem / Gazprom / Statoil / Pertamina / ONGC / PetroVietnam / Petrobras and other partners are drilling in deep water (1 mile deep) in the loop current between Cuba and Florida, with flows at 14 knots towards the Florida and Carolina Atlantic coasts and/or the Cuban coast – not, of course, regulated by US permitting. Worse still …

People have already been prosecuted heavily for US content of technology (see partners) delivered indirectly to the Cuban drilling industry. The US (politically) cannot provide BOP or containment technology for a drilling operation that threatens the US coastline. People are trying to “do the right thing” without getting fired for legal infringements, amongst the political regulation. Interesting angle.

More on Macondo

I’ve now had time to read the whole US Commission report on the BP Deepwater Horizon disaster in the Gulf of Mexico – including the discussion sections that I’d not read earlier, in order not to be influenced when I published my initial conclusions. It is ever clearer.

“Most, if not all, of the failures at Macondo can be traced back to underlying failures of management and communication. Better management of decision-making processes within BP and other companies, better communication within and between BP and its contractors, and effective training of key engineering and rig personnel would have prevented the Macondo incident.”

My emphasis this time on their positive use of “would” – ie without doubt. My own agenda here is to pick up those communication and decision-making aspects of business management systems, but as an engineer in the downstream business and as a human, you have to feel for the guys who made the mistakes and struggled with their consequences, in many cases to their deaths.

It’s a long time since BP has been a “British” company, and any finger-pointing between BP, Halliburton and Transocean is unhelpful. It is creditable to notice lines in the official (US) report like

“As BP’s own report agrees …”

compared to

“Halliburton has to date provided nothing … “

or

“Halliburton should have …”

My point is that the responsibility is shared industrially (as the report concludes), and I see BP taking its share.

I make that point because I did make an observation earlier about the hairy-arsed “wild-catting” culture present at the sharp end in this industry, with a US frontier freedoms mentality wherever in the world the operation is. Any sophisticated business managing such operations – however good BP is – would be unlikely to change that “by design” and in fact should think hard before attempting to do so.

Remember this was one of the largest, newest and most sophisticated rigs in the world. There is a recommendation about the control and monitoring systems in use, particularly during the fateful period when the “kick” had already started and the fatal blow-out was on its way :

“Why did the crew miss or misinterpret these signals? One possible reason is that they had done a number of things that confounded their ability to interpret [the] signals ….

In the future, the instrumentation and displays used for well monitoring must be improved. There is no apparent reason why more sophisticated, automated alarms and algorithms cannot be built into the display system to alert the driller and mudlogger when anomalies arise. These individuals sit for 12 hours at a time in front of these displays. In light of the potential consequences, it is no longer acceptable to rely on a system that requires the right person to be looking at the right data at the right time, and then to understand its significance in spite of simultaneous activities and other monitoring responsibilities.”

Hard to argue with that ? But, very important to distinguish decision-making from decision-support. You (we all) are relying on a tremendous amount of experience and judgement, not to mention risk-taking balls, at the upstream sharp-end of the business, drilling into the unknown. There will be blood ? Hopefully not, but it is part of the risk. There are some clear management and control-system safety-critical steps in all these processes, which need to be treated as such, with fail-safe steps needed, but we need to be careful not to (try to) automate all risk out of the system. People are highly ingenious at bypassing systems that prevent them doing their job. Applying controls in the wrong places can counter-intuitively increase the risks. We need systems that support people doing their jobs, not take them out of the loop entirely. There is good reason why the human eye is brought to bear on these processes. Proper risk assessment is one thing, but knowing when to do it and what to do with the result needs focus.
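By way of illustration only – making no claim about what the Deepwater Horizon systems actually monitored – here is a minimal sketch of the kind of automated anomaly alarm the recommendation points at. The signal names and thresholds below are assumptions, not taken from the report or any real rig monitoring system.

```python
# Hypothetical kick alarm: flag when returns persistently exceed pump rate or
# the mud pits keep gaining volume - classic influx indicators. Thresholds are
# illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class WellSample:
    flow_in_gpm: float    # mud pumped into the well
    flow_out_gpm: float   # returns measured at the flow line
    pit_gain_bbl: float   # cumulative gain in the mud pits

def kick_alarm(samples, flow_delta_gpm=25.0, pit_gain_limit_bbl=10.0):
    """Return (sample index, reason) pairs wherever an influx indicator trips."""
    alerts = []
    for i, s in enumerate(samples):
        if (s.flow_out_gpm - s.flow_in_gpm) > flow_delta_gpm:
            alerts.append((i, "flow-out exceeds flow-in"))
        if s.pit_gain_bbl > pit_gain_limit_bbl:
            alerts.append((i, "sustained pit gain"))
    return alerts

samples = [
    WellSample(flow_in_gpm=800, flow_out_gpm=805, pit_gain_bbl=1.0),
    WellSample(flow_in_gpm=800, flow_out_gpm=860, pit_gain_bbl=14.0),
]
print(kick_alarm(samples))   # the second sample trips both conditions
```

The point is decision-support, not decision-making: the alarm draws the right eyes to the right data at the right time, it doesn’t replace the driller’s judgement.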

There are a number of other things also borne out by the report.

If you’ve never actually experienced a disaster first hand, it is difficult to appreciate that one is actually taking place; denial is naturally human – the hope for anything but that. By definition, the safer the industry in general, the fewer participants have the necessary experience. The captain of the Titanic comes to mind. Drills and simulations of the worst case risks become so important to take seriously. This point is so important it makes it into the summary paragraph above.

Integrity & pressure testing is something of which I have considerable experience. Such testing inevitably occurs late in the process, as early as possible naturally, but nevertheless towards the end of the job. Inevitably the consequences of failing such a test can therefore have great business delay, cost and rework consequences, and all the attendant contractual responsibility wrangling that might entail. So, paradoxically, it is at the integrity / pressure test point when you most want failure to occur. Such tests may be potentially destructive by design and if it’s going to fail, this is precisely when we need it to happen, when the health and safety risk is lowest and the business value risk almost at its peak. You need to be looking for failure here. It takes balls to fail a pressure / integrity test, and the people & processes here need real authority and independence from the business productivity roles. I already mentioned the need to acknowledge safety criticality in levels of surveillance and regulation imposed from outside the working team. Again the report (and BP’s own actions since their own investigation) well recognize this issue. There really should have been (almost literally) alarm bells ringing before this test process even started. It could hardly have been more critical.

From the most significant failure point to an incidental one, though both are examples of communication of information for decision-making in the summary paragraph: the confusion about whether or not the specified spacers had actually been delivered and were available as the correct type (design-class), affecting the decision as to the spacing arrangement actually deployed. Several ironies in that inconclusive chain of decisions, which provided the unfortunate quote used as the headline in the report.

“Who cares. It’s done … we’ll probably be fine …”

Supply chain confusion about the type of materials actually delivered and available. How hard can it be for supplied items to be marked and systems informed with their true class (type) ? One for the information modelling and class libraries aspects of the ISO15926 day job.
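For what it’s worth, a loose sketch of the idea – mark each supplied item with an explicit class reference against a shared reference-data library, and let the system check it against what was requisitioned. The class names and fields below are invented for illustration, not real ISO 15926 reference data.

```python
# Hypothetical supply-chain check: does the delivered item carry the class
# (type) that was actually specified? Identifiers are invented for illustration.

from dataclasses import dataclass

@dataclass(frozen=True)
class SuppliedItem:
    tag: str          # physical tag / marking on the item
    rdl_class: str    # reference to its class in a shared reference-data library

def check_delivery(requisitioned_class: str, delivered: SuppliedItem) -> bool:
    """The decision-support question: is this really the class we specified?"""
    return delivered.rdl_class == requisitioned_class

item = SuppliedItem(tag="SPACER-001", rdl_class="RDL:ViscousSpacer16ppg")
print(check_delivery("RDL:ViscousSpacer16ppg", item))   # True, or a flagged mismatch
```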

The BP Commission Report

Still digesting this

They were operating on well-known and understood tight margins of pressure balance ever since the incident during partial drilling by the earlier rig, and right through completion of the drilling to the final “primary” cement job. That balance was always between too little (mud, pressure, cement, etc) failing to control the hazardous hydrocarbons, vs too much (mud, pressure, cement, etc) destroying (the value of) the well. It may seem scary to lay people, but this is always what engineering is about – difficult judgements by responsible, moral people – we’ll “probably” be OK. It looks like “cost-cutting” to do less, but we all cost-cut (look for the best price, the most cost/value-effective option) every day.
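To make the “tight margin” concrete, a rough illustrative calculation: the mud column’s hydrostatic pressure has to sit above the pore pressure (too little and the formation flows into the well) and below the fracture pressure (too much and the formation breaks down). The standard oilfield conversion is roughly 0.052 psi per ppg of mud weight per foot of true vertical depth; the numbers below are made up, not Macondo’s actual figures.

```python
# Illustrative pressure-balance window - hypothetical numbers, not Macondo data.

def hydrostatic_psi(mud_weight_ppg: float, tvd_ft: float) -> float:
    """Downhole hydrostatic pressure from the mud column (approx. 0.052 psi/ppg/ft)."""
    return 0.052 * mud_weight_ppg * tvd_ft

def margins(mud_weight_ppg, tvd_ft, pore_psi, frac_psi):
    p = hydrostatic_psi(mud_weight_ppg, tvd_ft)
    return {
        "mud_pressure_psi": round(p),
        "margin_above_pore_psi": round(p - pore_psi),   # must stay positive
        "margin_below_frac_psi": round(frac_psi - p),   # must stay positive
    }

# A hypothetical deep well: the window is only a few hundred psi either way.
print(margins(mud_weight_ppg=14.0, tvd_ft=18000, pore_psi=12900, frac_psi=13400))
```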

[At this point, I’ve only read as far as the end of the cement design and analysis – ch4, p102 – and I’ve not seen any mentions (yet) of the problems and risks associated with the BOP systems, or the top-sides relief systems, serious but secondary – but I’ll hazard a guess (based on earlier reading of BP’s own report) that the real failure is the decision to ignore the failed negative pressure test (!), and the failure of any warning / criticality signs in BP’s higher supervisory management systems that this whole operation was on tight margins, which could have enforced double checks on the safety-critical decision points, like this one, and other additional quality surveillance. As I said earlier the irony is that BP were one of the first to introduce “criticality” ratings to the industry, 25 years ago.]

So, continuing, reading on … a quote from the commission report (their italic emphasis, not mine), where even with hindsight their use of the conditional “would” is telling.

“At the Macondo well, the negative-pressure test was the only test performed that would have checked the integrity of the bottom-hole cement job.”

And later …

“It was therefore critical to test and confirm the ability of the well (including the primary cement job) to withstand the under-balance.”

The visiting execs and the new trainee in the team both add to the dynamics of dealing with the apparent problem at a critical moment in what was already known to be a critically-balanced situation – interesting. And then the fateful error :

” … the 1,400 psi reading on the drill pipe could only have been caused by a leak into the well. Nevertheless, at 8 pm, BP Well Site Leaders, in consultation with the crew, made a key error and mistakenly concluded the second negative test procedure had confirmed the well’s integrity.”

After that, yes, the BOPs should have been a last line of defence, but weren’t … it’s history … Having been in the pressure testing position myself on several projects, I feel for Anderson …. was he amongst the dead, I wonder ? [He was.]
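A minimal sketch of how the negative-pressure-test interpretation could be made explicit and machine-checkable, rather than left entirely to judgement in the moment. The 1,400 psi figure comes from the report quote above; the tolerance is an assumption for illustration only.

```python
# Hypothetical pass/fail rule for a negative-pressure test: a sound barrier
# should hold ~0 psi and show no flow once bled off. Tolerance is illustrative.

def negative_test_result(drillpipe_psi_after_bleed: float,
                         flow_observed: bool,
                         psi_tolerance: float = 50.0) -> str:
    """Any sustained pressure build-back or flow means the barrier is leaking."""
    if flow_observed or drillpipe_psi_after_bleed > psi_tolerance:
        return "FAIL - integrity not confirmed, stop and investigate"
    return "PASS - integrity confirmed by this test"

# The reading that was instead explained away on the night:
print(negative_test_result(drillpipe_psi_after_bleed=1400.0, flow_observed=False))
```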

The recommendations need reading in detail, but they look like systemic management / surveillance / regulation needs, so that what look like normal processes in abnormal situations don’t (accidentally) skip critical checks. To their credit, BP still seems to be taking the full hit of responsibility, but I doubt BP is special in this respect. These are industry needs.