Monthly Archives: February 2013

The System Isn’t Working

I first heard this expression in the mid-1980s.  It was the reason given to a co-worker at Boeing regarding why her raise, approved months earlier, had never found its way onto her paycheck.

The ‘system’ meant the procedures the company required to go from an authorized pay raise to one actually implemented in the payroll system.  Until the system was repaired, my friend would not receive any additional money.

I don’t know what was amiss in the Boeing payroll department at that time. It was eventually repaired, and the additional pay per hour was paid retroactively to the approval date – at least to employees still on the payroll when they worked their way through the backlog.

At the time, I didn’t understand how a functional, productive company could have such a massive failure in something as basic as providing employees with accurate and up-to-date paychecks.  With a few decades of experience working on quality improvement, I now understand how that seemingly inexplicable, dysfunctional system can occur and why it happens.

What happens is that administrative systems grow organically in production environments.  When something ‘works’, it later gets codified into procedures.  Those procedures don’t necessarily allow for the change that is needed later as the world moves on. This can leave the paperwork channels so completely constipated that nothing gets through.

If it is a problem that affects production, it gets fixed in some way – not necessarily the best – and becomes another layer of calcified procedure for future engineers to deal with.  If it doesn’t directly affect production, then it becomes a chronic situation, like a constant low-grade headache.  Because getting raises to employees does not affect production, it is a low-priority problem.  Hence, the payroll clerk must resort to the excuse that ‘the system isn’t working’.

My idea of computing generic basis values rests on an understanding that current quality assurance inspection plans are all based on a flawed model. The model works, but the more and better the data available, the worse the problem becomes.

The current system isn’t working – in particular, it isn’t working for the economic production of parts made from composite materials consistently enough to meet schedule while producing adequate quality with the resources available.

But to fix that problem requires a change upstream of even design engineers.  It requires a change in the computation of basis values in order to facilitate a change in specification requirements.  Generic basis values are, as should be expected, lower than those computed using our current system.  They are designed to apply to a community of producers as a whole rather than to a specific manufacturer.

A number of issues contribute both to why this problem exists and to its solution, which lies outside the domain of the people who must deal with the problem.

I’ll just try to list them for now and create separate posts for each issue later.

Computing engineering design values for newly developed materials, such as the B-basis value for strength, is extraordinarily expensive.  For economic reasons, testing must be kept to a minimum.  However, to further complicate matters:

1.    Unlike metals, the properties of composite materials can be substantially affected by unidentified differences between different manufacturing facilities – even facilities run by the same organization and following the same processing steps.

2.    Unlike metals, the properties of composite materials can be substantially affected by environmental conditions during the life of the product.  This means that testing must be done in a variety of different environments.

Interestingly enough, despite the much smaller variability in their properties, metals require far more data in order to compute industry-recognized design values.
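To make ‘B-basis’ concrete: it is a statistical lower bound – with 95% confidence, at least 90% of the material population exceeds it.  Here is a minimal sketch of the normal-distribution version, using a classical closed-form approximation for the one-sided tolerance factor.  This is illustrative only: real basis-value procedures use exact noncentral-t factors plus checks for outliers, batch effects, and non-normality, and the strength numbers below are entirely hypothetical.

```python
import math
from statistics import NormalDist, mean, stdev

def b_basis_normal(data, p=0.90, conf=0.95):
    """Approximate one-sided lower tolerance bound for normal data:
    with `conf` confidence, a fraction `p` of the population exceeds
    the returned value.  Uses a classical normal approximation for
    the tolerance factor k (illustrative sketch, not a qualified
    basis-value procedure)."""
    n = len(data)
    zp = NormalDist().inv_cdf(p)      # z for the content fraction (0.90)
    zc = NormalDist().inv_cdf(conf)   # z for the confidence level (0.95)
    a = 1 - zc**2 / (2 * (n - 1))
    b = zp**2 - zc**2 / n
    k = (zp + math.sqrt(zp**2 - a * b)) / a   # approximate tolerance factor
    return mean(data) - k * stdev(data)

# Hypothetical coupon strength data (ksi) from a single small sample.
strengths = [142.1, 138.5, 145.0, 140.2, 139.8, 143.6, 141.0, 137.9,
             144.2, 140.7, 142.9, 138.8]
b = b_basis_normal(strengths)
print(f"B-basis estimate: {b:.1f}")
```

Note how far the bound sits below the sample mean even for well-behaved data: with only a dozen specimens, the tolerance factor k is large, which is exactly why generating enough test data to set basis values is so expensive.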

The other side of the equation is setting up criteria for production facilities to demonstrate that their product is as good as or better than the original sample used to set the basis values. The way this is done has not changed in the decades since it was first developed.  The method is based on the assumption that we want to detect any difference, no matter how slight: any detectable difference raises a flag.

Why is that a problem?  Because the more data you have and the better your analysis technique, the smaller the detectable difference becomes.  Detecting minor differences that are of no or trivial significance is a waste of time and resources.  When it happens repeatedly, this waste becomes incorporated into the cost of using composite materials, and it can significantly raise that cost.
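The scaling behind that claim can be sketched with a textbook power calculation: for a two-sided z-test, the smallest mean shift you can reliably detect shrinks in proportion to 1/√n, so more data guarantees that ever smaller – and eventually meaningless – differences will trip the flag.  The σ, α, and power values below are illustrative assumptions, not anything from a real specification.

```python
import math
from statistics import NormalDist

def min_detectable_shift(n, sigma=1.0, alpha=0.05, power=0.80):
    """Smallest mean shift a two-sided z-test detects with the given
    power, in the same units as sigma.  Standard sample-size formula:
    shift = (z_{1-alpha/2} + z_{power}) * sigma / sqrt(n)."""
    z = NormalDist().inv_cdf
    return (z(1 - alpha / 2) + z(power)) * sigma / math.sqrt(n)

# As the sample grows, the detectable shift shrinks like 1/sqrt(n):
for n in (10, 100, 1000):
    print(n, round(min_detectable_shift(n), 3))
```

With a thousand observations, a shift under a tenth of a standard deviation – almost certainly of no engineering significance – becomes statistically detectable, and under a ‘flag every detectable difference’ rule it triggers an investigation anyway.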