Sit back and let me tell you a story. Many years ago, shortly after I moved from aerospace to oil, I was approached by one of my employer’s workshop supervisors with a problem. Our company made, used and sold an item called a “casing centraliser”. This was a relatively simple piece of equipment: two bands of steel that clamped around a section of steel casing to be run into a well, the bands joined by a number of bowed sections that would hold the casing centrally within the hole (often a larger size of casing) into which it was being run. The centraliser’s main function was to ensure that, when cement was pumped into the well to seal the annulus between the casings, there would be a sufficient gap to ensure the cement could do its job. Simple, possibly crude, but effective.
Our supervisor’s problem was that the welds holding the bowed sections to the clamp bands were sometimes broken when we received them; they were made centrally in the USA and then shipped to various company sites around the world. Broken welds, we knew, meant the centraliser wouldn’t do its job as effectively as intended. When we complained to head office, we were told they were good enough and just make sure enough were used to offset the effect of broken ones. That wasn’t good enough for us (especially for me, having come from an industry and manufacturer whose name is synonymous with quality). We decided we needed to check all receipts and to scrap any with broken welds; we’d tried our own weld repairs but the cost was going to be more than writing them off. Additionally, we implemented a relatively crude drop-test to try and eliminate any where the weld was only just holding.
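Head office’s advice – just use enough centralisers to offset the broken ones – is, at heart, a redundancy calculation. A minimal sketch of that reasoning, with entirely hypothetical numbers and the simplifying assumption that weld failures are independent with a known per-unit probability, uses the binomial distribution to ask how many units must be run to be reasonably confident of a minimum working count:

```python
from math import comb

def prob_at_least_k_working(n, k, p_fail):
    """Probability that at least k of n centralisers have intact welds,
    assuming each fails independently with probability p_fail."""
    return sum(comb(n, i) * (1 - p_fail) ** i * p_fail ** (n - i)
               for i in range(k, n + 1))

def units_needed(k, p_fail, target=0.99):
    """Smallest n giving at least k working units with probability >= target."""
    n = k
    while prob_at_least_k_working(n, k, p_fail) < target:
        n += 1
    return n
```

With a hypothetical 10% weld-failure rate and a job needing at least 10 working centralisers, this sketch suggests running 14 to be 99% confident – the sort of margin the “use extra” advice implies, whether or not anyone ever worked it out formally.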
One issue in our minds was that quality assurance and certification were getting significant consideration in the North Sea industry by then, and sending broken ones offshore would inevitably lead to client returns – something that could cost even more and tarnish our reputation.
There’s the obvious issue of defining quality – the product served its purpose, despite what we saw as poor quality. Who were we, in the UK, to say that broken welds were a sign of poor quality if the original product and service design took this into account and had worked well for many years? All we saw was the product before us and, because of our local situation where the product would be subject to more client scrutiny, we made a change – we filtered out what we saw as being sub-standard.
So far, we were enhancing product quality and improving safety (for a poor cement job could compromise well integrity). However, our actions potentially impacted on a global standard: by establishing a different standard in our region we were, potentially, compromising the standard elsewhere. Locally, there was less need to provide the extra coverage. What would happen, for example, if a North Sea trained service engineer, used to the better reliability of our “filtered” product, moved to another region using items sourced from the USA – would he/she be aware of the need to specify the extra coverage? We also sold on to other regions – were they aware of our change and what would happen if they reverted to supply direct from source?
I don’t know the answers to these questions – they’re hypothetical and well in the past. I’m using them to illustrate a potential risk of combining global standardisation and local changes. Standardisation isn’t inherently wrong – indeed, it should normally be a positive move. Neither is it wrong to apply local changes, to address the local requirements or expectations. However, both standardisation and localisation need to take the full picture into account, something that can be very challenging in anything but a simple situation. Standardisation needs to engage all parties involved (which will include some only peripheral to the core need) and include culture, custom, local practice, history and change – amongst other factors that haven’t yet occurred to me.
Many quality professionals will attribute the root cause of any potential problem to the corporate acceptance of sub-standard welds – as I did back when this happened. But what was the standard? Surely it should be that which was fit for purpose and, if the standard practice was to use extra parts to allow for the failures, then failing welds did not inherently make the product sub-standard. It’s the full picture that matters and this is where the consummate quality professional brings real value to the table – ensuring a balanced and holistic approach. Unbiased by discipline, department or company politics (or, at least, able to recognise such factors and make due allowance), the quality professional should be able to provide assurance that whatever is delivered will meet the customer requirements.
Personally, I still tend to the view I had all those years ago: that the welding standard was not satisfactory and that the corporate acceptance of the situation could have contributed to any problems that might arise. Unless you know the failure rate with reasonable accuracy you can’t calculate reliability with any reasonable confidence. Experience had taught the service personnel how many were usually needed in practice – at least, how many hadn’t knowingly led to problems – but (and this is purely conjecture and an opinion) I doubt a full analysis had been done to determine the level of confidence they should have.
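The point about knowing the failure rate with reasonable accuracy can be made concrete. A short sketch, using hypothetical inspection figures and the standard Wilson score interval for a binomial proportion, shows how wide the uncertainty remains even after a sizeable inspection:

```python
from math import sqrt

def wilson_interval(failures, trials, z=1.96):
    """Wilson score confidence interval for a binomial proportion
    (z=1.96 gives an approximate 95% interval)."""
    p = failures / trials
    denom = 1 + z ** 2 / trials
    centre = (p + z ** 2 / (2 * trials)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / trials + z ** 2 / (4 * trials ** 2))
    return centre - half, centre + half
```

Inspect a hypothetical batch of 100 and find 8 broken welds, and the plausible failure rate still spans roughly 4% to 15% – a spread wide enough to make any reliability claim built on it decidedly shaky, which was exactly my worry.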
I’m now in danger of getting into circular arguments, so time to stop. I’ll return to reliability calculation at a later time.