Calibration, at its core, is a comparison between a known value and an unknown one, with a simple expression of pass or fail based on the specifications of the device under test (also known as the DUT) and whether the measured value falls inside or outside that specification. That said, the calibration of inspection and test devices has evolved into a virtual pick-your-own-adventure exercise, and depending on the level of risk you can accept, the traditional pass/fail statement may no longer suit your needs. Simply stating that your gage passed or failed a calibration does not tell the whole story. Yes, the certificate of calibration says that it passed or failed, but some questions must be asked: What considerations did the calibration laboratory use in making this statement? Do you, as the customer, have any input into which rule is chosen? Which rule is right for me? Unfortunately, passing or failing calibration results cannot be boiled down to simple statements without further explanation of how the acceptance statement was arrived at and what considerations the laboratory made when issuing it.


Considerations In Statements Of Conformance

The key concept to understand before breaking down statements of conformance is what calibration actually is. The International Organization of Legal Metrology defines it in OIML V2-200:2007, the International Vocabulary of Metrology (VIM), as: “operation that, under specified conditions, in a first step establishes a relation between the quantity values with measurement uncertainties provided by measurement standards and corresponding indications with associated measurement uncertainties and, in a second step, uses this information to establish a relation for obtaining a measurement result from an indication.” At first glance, one might ask what the difference is between my simple definition and the formal one. Well, the VIM expressly mentions measurement uncertainty and its relationship to measurement results. The definition does not mention any form of adjustment, but for the sake of keeping this guide condensed and related only to decision rules, I won’t wade into that controversial divide.


Given the definition of calibration provided by the VIM, the pass/fail compliance statement cannot be made fully without some consideration of measurement uncertainty. Referencing the VIM again, you’ll find that measurement uncertainty is formally defined as: “non-negative parameter characterizing the dispersion of the quantity values being attributed to a measurand, based on the information used.” Since this definition is somewhat abstract, the VIM further explains measurement uncertainty in Note 3: “Measurement uncertainty comprises, in general, many components. Some of these may be evaluated by Type A evaluation of measurement uncertainty from the statistical distribution of the quantity values from series of measurements and can be characterized by standard deviations. The other components, which may be evaluated by Type B evaluation of measurement uncertainty, can also be characterized by standard deviations, evaluated from probability density functions based on experience or other information.” This description is the typical framework for the measurement uncertainty calculations of ISO 17025 accredited calibration laboratories.
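
To make the Type A/Type B distinction more concrete, here is a minimal Python sketch, not drawn from the VIM or any laboratory’s procedure, that combines one Type A component (the standard deviation of the mean of repeated readings) with one Type B component (a manufacturer’s ± specification treated as a rectangular distribution) and applies an assumed coverage factor of k = 2. Real uncertainty budgets contain many more components; the function name and example values are purely illustrative.

```python
import math
import statistics

def expanded_uncertainty(readings, spec_half_width, k=2.0):
    """Illustrative GUM-style combination of one Type A and one Type B component."""
    # Type A: standard deviation of the mean of repeated readings
    u_a = statistics.stdev(readings) / math.sqrt(len(readings))
    # Type B: a +/- specification treated as a rectangular distribution
    u_b = spec_half_width / math.sqrt(3)
    # Combined standard uncertainty (root sum of squares), expanded by k (k = 2 ~ 95%)
    return k * math.sqrt(u_a**2 + u_b**2)

# Five repeated readings of a 10.000 mm gage block with a +/-0.002 mm reference spec
print(expanded_uncertainty([10.001, 10.002, 10.000, 10.001, 10.002], 0.002))
```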

Thus far, we’ve established a baseline for what calibration is and how essential measurement uncertainty is to it, and we’ve briefly touched on the basic framework for how measurement uncertainty is calculated. After reviewing these two subjects, a consumer of calibration services might assume that the calibration laboratory has fully considered measurement uncertainty on their behalf and provided the lowest-risk option when making the compliance statement. That assumption might not be true. There is a chance that the calibration provider is only offering simple acceptance, which can carry up to a 50% chance of false acceptance or false rejection, and that may increase risk to you and your customers beyond a point that is acceptable.
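
Where does the “up to 50%” figure come from? It follows from a result that lands right at a tolerance limit: if the dispersion of plausible true values is modeled as a normal distribution centered on that result, about half of that distribution lies outside the limit. The short sketch below illustrates that reasoning; the normal model, the function name, and the numbers are assumptions made only for illustration.

```python
import math

def false_accept_probability(measured, upper_limit, std_uncertainty):
    """Chance the true value exceeds the upper tolerance limit, assuming the
    true value follows Normal(measured, std_uncertainty)."""
    z = (upper_limit - measured) / std_uncertainty
    # Standard normal survival function via the complementary error function
    return 0.5 * math.erfc(z / math.sqrt(2))

# A result sitting exactly on a 10.010 mm upper limit: ~50% risk it is truly out of tolerance
print(false_accept_probability(10.010, 10.010, 0.001))
# A result well inside the limit: the risk collapses toward zero
print(false_accept_probability(10.005, 10.010, 0.001))
```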

Per ISO 17025:2017, you as the consumer of calibration services must agree to the decision rule selected, so understanding what basic types are available is critical to reducing risk and increasing your awareness of the potential risks of each. Please do not automatically assume that the laboratory has done an analysis for you that meets your needs. You are responsible for any additional risk that results from a calibration that does not directly consider uncertainty in a way that maximizes the reduction of risk. It is therefore critically important to be clear with the calibration provider about what type of decision rule you require and to know what level of risk is acceptable based on the processes the DUT is part of.



Typical Types Of Decision Rules

There are many types of decision rules; many of the most common are laid out for your review in ILAC-G8:09/2019, Guidelines on Decision Rules and Statements of Conformity. The ILAC Mutual Recognition Arrangement has signatories all over the world, including six in the United States, and you might recognize the ILAC logo on calibration certificates you have received. Within ILAC-G8 Appendix B there are three common examples of decision rules you might have seen from calibration providers. Below I’ll explain examples of those three decision rules and their benefits.


Simple Acceptance

The simple acceptance rule is the most commonly provided, mainly due to its ease of use by the calibration provider and the lessened analysis required of the end user. The simple acceptance rule only requires the measurement result to fall within the DUT’s specification for a passing result to be issued; for a failing result, the measurement result simply falls outside of the DUT’s specification. Easy, right? Well, maybe. Keep two caveats in mind if selecting this method: the calibration laboratory must maintain a test uncertainty ratio (TUR) greater than 3:1 for this to be a valid choice without further analysis, and the simple acceptance rule allows for significant false acceptance and false rejection rates of up to 50%. Why? The simple acceptance rule makes no consideration of where the measurement falls within the acceptance interval and no direct consideration of measurement uncertainty; it is a presumptive measure in that it assumes a TUR greater than 3:1, and in many instances such a TUR may not be achievable by your calibration provider. An end user requesting this approach would be wise to evaluate what level of risk they can tolerate, as this rule offers the highest global risk of any of the three choices discussed here.
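
As a rough illustration of how this rule operates, the sketch below accepts any result that falls within the DUT specification, after first checking the 3:1 TUR caveat. The TUR is computed here as the tolerance span divided by the expanded uncertainty span, which is one common definition but an assumption on my part; the limits and readings are invented for the example.

```python
def simple_acceptance(measured, lower_limit, upper_limit, expanded_uncertainty):
    """Simple acceptance: pass when the result lies anywhere inside the DUT spec.

    The 3:1 TUR caveat from the text is checked first; the TUR here is the
    tolerance span divided by the expanded uncertainty span (an assumed definition).
    """
    tur = (upper_limit - lower_limit) / (2 * expanded_uncertainty)
    if tur <= 3.0:
        raise ValueError(f"TUR of {tur:.1f}:1 does not exceed 3:1 -- further analysis needed")
    return lower_limit <= measured <= upper_limit

# 10.000 mm nominal, +/-0.010 mm tolerance, U = 0.002 mm (TUR 5:1), reading of 10.007 mm
print(simple_acceptance(10.007, 9.990, 10.010, 0.002))  # True: pass, even this close to the limit
```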


Non-Binary Acceptance

The non-binary acceptance rule incorporates a guard band consisting of the measurement uncertainty. This rule is less common than the simple acceptance rule; however, end users who have identified specific risks to their processes from false acceptance or false rejection request it. When applied in a manner that does not accept any results falling within the guard band, it can result in up to a 2.5% chance of false acceptance or false rejection. However, when applied at face value, with all conditionally passing or conditionally failing values considered acceptable, the chance of false acceptance or false rejection can increase up to 50%, the same risk as the simple acceptance rule. This method requires careful examination of the location of the measurement results in either the conditional pass or conditional fail ranges to determine how much risk would be taken on by accepting any conditional results.
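
One common way to read this rule, and the reading assumed in the sketch below, is a guard band equal in width to the expanded uncertainty U placed just inside each tolerance limit, giving four possible outcomes instead of two. The zone boundaries, function name, and example values are illustrative, not a statement of how any particular provider implements ILAC-G8.

```python
def non_binary_acceptance(measured, lower_limit, upper_limit, expanded_uncertainty):
    """Non-binary decision: guard bands of width U placed inside each tolerance limit."""
    u = expanded_uncertainty
    if lower_limit + u <= measured <= upper_limit - u:
        return "pass"              # inside the guarded acceptance zone
    if lower_limit <= measured <= upper_limit:
        return "conditional pass"  # in tolerance, but inside a guard band
    if measured < lower_limit - u or measured > upper_limit + u:
        return "fail"              # out of tolerance by more than U
    return "conditional fail"      # out of tolerance, but within U of a limit

# 10.000 mm nominal, +/-0.010 mm tolerance, U = 0.002 mm
for reading in (10.003, 10.009, 10.011, 10.015):
    print(reading, non_binary_acceptance(reading, 9.990, 10.010, 0.002))
```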

Additionally, a calibration provider who offers this acceptance rule must be able to provide a certificate identifying these conditional data points and will typically rely on robust calibration software to automate certificate generation.

In my experience, many end users will use this rule and treat the guard band in the conditional passing zone as a warning to replace the DUT if it is a fixed-limit type, or as an opportunity to adjust the DUT when possible.


Binary Acceptance Rule

The binary acceptance rule again uses a guard band. However, the acceptance interval is calculated using a root sum square (RSS) method, with the DUT tolerance limit and the measurement uncertainty as contributors to the calculation. This rule does reduce the acceptance interval of the DUT. End users of this rule require an absolute pass or fail statement. This rule offers the lowest global risk, ≤ 2%, making it suitable for users who require the lowest levels of risk.
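
The sketch below assumes one common RSS formulation, in which the acceptance half-width is the square root of the tolerance squared minus the expanded uncertainty squared, so the acceptance interval shrinks as uncertainty grows. Whether this matches a particular provider’s guard band calculation is something to confirm with them; the names and numbers are illustrative.

```python
import math

def binary_acceptance_rss(measured, nominal, tolerance, expanded_uncertainty):
    """Binary decision with an RSS-derived acceptance interval.

    The acceptance half-width is sqrt(T^2 - U^2), so only an unambiguous
    pass or fail is ever reported (one assumed RSS formulation).
    """
    if expanded_uncertainty >= tolerance:
        raise ValueError("Uncertainty is as large as the tolerance; no acceptance interval remains")
    half_width = math.sqrt(tolerance**2 - expanded_uncertainty**2)
    return abs(measured - nominal) <= half_width

# 10.000 mm nominal, +/-0.010 mm tolerance, U = 0.002 mm: interval shrinks to about +/-0.0098 mm
print(binary_acceptance_rss(10.0090, 10.000, 0.010, 0.002))  # True: pass
print(binary_acceptance_rss(10.0099, 10.000, 0.010, 0.002))  # False: fail
```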

Here again, calibration providers who offer binary acceptance rules will typically have robust calibration software to calculate the guard band and the resulting acceptance decision.

In my experience, this is the least common approach. However, it is preferable for reducing significant risk from calibration results.


Conclusion

This how-to covered what calibration is, the primary considerations when making statements of compliance, and examples of three common decision rules. The three rules covered are not the only available types, but from a “new to this” perspective they are the best examples to demonstrate, as ILAC-G8 and OIML V2-200 (VIM) offer the most practical and valid explanations available. Both documents are free to download from the ILAC and OIML websites: https://ilac.org/ and https://www.oiml.org/en.