Data Points: Fundamental Measurement Concepts
Q. What are the basic concepts of measurement?
A. ASTM International members bring a broad range of backgrounds suited to their individual fields. However, not everyone has the same level of familiarity with measurement quality concepts. One need not be an expert in metrology to use the key concepts of measurement quality; what is important is that we share the same vocabulary.
Quality Concepts
Measurement quality concepts include precision, accuracy, bias, uncertainty, and resolution. These concepts apply to reporting a measurement result, to characterization of test methods, to evaluation against a specification, and to criteria for the development of both test methods and specifications. While there is much overlap between these concepts of measurement quality, they each have distinct meanings and uses.
Measurement Result
Understanding many of the measurement quality concepts begins by acknowledging that the answer yielded by a measurement is not the correct answer, not the true value, but something close to it (very close, if the measurement is of high quality). A quantitative test or measurement result is simply one execution, or realization, of a measurement procedure, also sometimes called a measurement process. Repeated measurement realizations will not generate identical results, though sometimes the variation may be finer than the resolution of the instrument and therefore not visible in the readings.
Contrast this with the measurand, which is defined as the quantity intended to be measured. A measurand is not a measurement result, and its true value is unknowable. For a sound pressure level measurement, the measurand is the sound pressure level itself: not the reading from a sound level meter, but the actual sound pressure level.
A measurement result is considered an estimate of the measurand, or a close approximation of the actual quantity being measured. Scientists generally consider any stated quantity to be incomplete without some accompanying information on the error, imprecision, or uncertainty associated with the stated quantity.
Significant Digits
The standard practice for using significant digits in test data to determine conformance with specifications (E29) addresses a common misconception when it states, “It should not be inferred that a measurement value is precise to the number of significant digits used to represent it.” The misconception that significant digits convey the quality of a value seems to come from an idea that rounding off digits removes error. However, rounding does not remove error; rather, it introduces additional error. Prematurely rounding a number has the same effect as using an instrument with coarse resolution, as discussed below.
Knowing how many digits to retain in computations was a useful skill when calculations were carried out by hand or with a slide rule. With computer calculations, carrying many digits is no inconvenience, so we can use all available digits to preserve the data, including any error in those data. Extra digits do no harm to the measurement or to the uncertainty.
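To make this concrete, here is a brief sketch (not from the article; the measurand value and noise level are hypothetical) showing how rounding readings before averaging behaves like measuring with a coarse-resolution instrument:

```python
# Illustrative sketch: premature rounding acts like coarse instrument
# resolution. We simulate repeated readings of a hypothetical true value,
# then compare the mean of full-precision readings against the mean of
# readings rounded to one decimal place before averaging.
import random
import statistics

random.seed(1)
true_value = 10.0417  # hypothetical measurand value for this sketch
readings = [random.gauss(true_value, 0.02) for _ in range(1000)]

mean_full = statistics.mean(readings)
mean_rounded = statistics.mean(round(x, 1) for x in readings)  # rounded early

print(f"error with full precision : {abs(mean_full - true_value):.5f}")
print(f"error after early rounding: {abs(mean_rounded - true_value):.5f}")
```

Because the simulated noise is smaller than the 0.1 rounding step, the quantization error does not average out, and the early-rounded mean lands noticeably farther from the true value.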
Resolution
Resolution is simply a description of the capability of a measurement instrument: the smallest indication of a difference that the instrument can provide. This is often the smallest graduation on an analog instrument, the smallest digit shown on an instrument's digital display, or the smallest digit provided in a digital data export. One way instrumentation can affect measurement quality is through its resolution: coarser resolution has the potential to contribute more to error or uncertainty, while finer resolution contributes less.
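For a sense of scale, a common metrology convention (from the GUM, not something this article prescribes) treats the error due to finite resolution as uniformly distributed across one graduation, which gives a standard-uncertainty contribution of the resolution divided by √12:

```python
# Rectangular-distribution model for resolution error (a GUM convention):
# a reading can fall anywhere within +/- half a graduation of the displayed
# value, so the standard uncertainty contributed is resolution / sqrt(12).
import math

def resolution_uncertainty(resolution: float) -> float:
    """Standard uncertainty contributed by an instrument's finite resolution."""
    return resolution / math.sqrt(12)

for res in (1.0, 0.1, 0.01):
    print(f"resolution {res:>5} -> standard uncertainty {resolution_uncertainty(res):.4f}")
```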
Measurement Uncertainty
Uncertainty is an attribute of a particular test result for a particular unit or material under test; it describes the range of values that could reasonably be attributed to the measurand. It is defined in the standard terminology relating to quality and statistics (E456) as “an indication of the magnitude of error associated with a value that takes into account both systematic errors and random errors associated with the measurement or test process.” When a value is expressed as X ± U, it expresses the measurement value together with its associated uncertainty.
Error is an unknowable quantity: the difference between the measurement result and the unknowable true value of the measurand. Standard uncertainty is an estimate of the standard deviation of the error for a particular test. Expanded uncertainty is reported as a multiple of the standard uncertainty. Uncertainty components can include both statistical and non-statistical evaluations. The metrologist is expected to provide a statement of uncertainty with the measurement results.
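As a minimal sketch of this reporting convention (the readings are invented, and the coverage factor k = 2 is a common choice for roughly 95 % coverage, not something E456 mandates):

```python
# Type A evaluation: estimate standard uncertainty from repeated readings,
# then report the result as X +/- U with expanded uncertainty U = k * u.
import statistics

readings = [62.1, 62.4, 61.9, 62.3, 62.2]  # hypothetical repeated readings

mean = statistics.mean(readings)
u = statistics.stdev(readings) / len(readings) ** 0.5  # standard uncertainty of the mean
k = 2.0    # coverage factor, roughly a 95 % level of confidence
U = k * u  # expanded uncertainty

print(f"result: {mean:.2f} +/- {U:.2f} (k = {k})")
```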
Precision
Precision is a measure of the expected agreement of repeated measurements of a test item when using a specified measurement procedure or process, or the expected variation of a measurement procedure under different conditions. There are several levels of precision corresponding to different conditions, which are discussed in the standard practice for use of the terms precision and bias in ASTM test methods (E177).
Reproducibility is the expected variation when a test is performed on the same material at different labs. Repeatability is the expected variation when a test is performed within the same lab, by the same analyst, within a short period of time. The repeatability reported in an ASTM test method is an average across the labs participating in an interlaboratory study and is not specific to any one lab. Together, reproducibility and repeatability are components of test-method precision.
Intermediate precision, sometimes called site precision or laboratory precision, is the expected variation when a test procedure is performed at one specific lab. The presumption is that variation under intermediate precision conditions will be greater than under repeatability conditions but less than under test-method reproducibility conditions.
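The sketch below, using invented data from three hypothetical labs, shows the variance decomposition behind these definitions, in the spirit of interlaboratory calculations such as those in practice E691: pooled within-lab variance estimates repeatability, and adding the between-lab component gives reproducibility.

```python
# Estimate repeatability (within-lab) and reproducibility (within- plus
# between-lab) standard deviations from a small, invented interlab study.
import statistics

labs = {  # hypothetical labs, each testing the same material four times
    "lab A": [10.1, 10.3, 10.2, 10.2],
    "lab B": [10.6, 10.5, 10.7, 10.6],
    "lab C": [10.0, 10.1, 9.9, 10.0],
}

n = len(next(iter(labs.values())))  # replicates per lab
cell_means = [statistics.mean(v) for v in labs.values()]

# repeatability variance: pooled within-lab variance
s_r2 = statistics.mean(statistics.variance(v) for v in labs.values())
# between-lab variance component, from the variance of the lab means
s_L2 = max(statistics.variance(cell_means) - s_r2 / n, 0.0)
# reproducibility variance combines both components
s_R2 = s_r2 + s_L2

print(f"repeatability   s_r = {s_r2 ** 0.5:.3f}")
print(f"reproducibility s_R = {s_R2 ** 0.5:.3f}")
```

As expected, the reproducibility standard deviation is at least as large as the repeatability standard deviation, since it includes the between-lab component.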
Bias
Bias is a measure of systematic error: the difference between the mean of test results on a reference material or test item and an accepted reference value. As mentioned earlier, the true value of the measurand is unknowable, so E177 provides for an accepted reference value determined by consensus, by theoretical calculation, or by several other means.
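As a simple illustration (all numbers invented), bias is estimated by comparing the mean of repeated results on a reference material against its accepted reference value:

```python
# Estimate bias: mean of repeated results minus the accepted reference value.
import statistics

accepted_reference = 50.00                     # hypothetical reference value
results = [50.21, 50.18, 50.25, 50.19, 50.22]  # hypothetical test results

bias = statistics.mean(results) - accepted_reference
print(f"estimated bias: {bias:+.3f}")
```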
Test-method bias and test-method precision are both attributes of a test method or measurement procedure. Precision and bias are important to establish for a measurement procedure in order to evaluate whether the procedure is suitable for determining the desired quantity. Contrast this with measurement uncertainty, which is an attribute of a specific test result on a specific test specimen.
Conclusions
The purpose of this article is not to provide instructions on any of these topics but rather to highlight the distinctions between concepts for measurements and measurement quality.
A measurement result is usually a number expressed in decimal notation, and by itself it says nothing about measurement quality. We should expect measurement data to inherently exhibit some level of error, uncertainty, or imprecision. We can then use the measurement quality concepts above to evaluate how much error, uncertainty, or imprecision accompanies a given measurement result.
Elliott Dick is an acoustical scientist at North Orbit Acoustic Laboratories and a member of the committees on building and environmental acoustics (E33) and quality and statistics (E11).