acceptance criteria
An agreed-upon condition or characteristic that must be present for a part to pass inspection.
accuracy
The predicted difference, on average, between a measurement and the true value. Accuracy is also known as bias.
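A common way to estimate accuracy (bias) is to measure a reference standard of known value several times and compare the average reading to that known value. The sketch below is a minimal illustration; the readings and the certified reference value are hypothetical.

```python
# Estimate gage bias (accuracy) from repeated readings of a known standard.
# The reference value and readings below are hypothetical illustration data.
readings = [25.403, 25.402, 25.404, 25.401, 25.403]  # mm, same operator, same standard
reference_value = 25.400                              # mm, certified value of the standard

mean_reading = sum(readings) / len(readings)
bias = mean_reading - reference_value                 # positive bias: gage reads high on average

print(f"Mean reading: {mean_reading:.4f} mm")
print(f"Estimated bias: {bias:+.4f} mm")
```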
actual measured value
The number indicated on the measuring device as the size of the dimension. Actual measured value is part of measurement value.
analysis of variance study
A series of measurement trials that analyzes repeatability, reproducibility, and the interactions between them and other causes of variation. Analysis of variance is also known by the acronym ANOVA.
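As a rough illustration of how such a study is often analyzed, the sketch below fits a two-way model with parts, operators, and their interaction; the residual term reflects repeatability. The data frame, column names, and values are hypothetical, and statsmodels is used here only as one possible tool.

```python
# Two-way ANOVA on gage study data: part, operator, and their interaction.
# Data layout and values are hypothetical illustration data.
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

data = pd.DataFrame({
    "part":     ["P1", "P1", "P2", "P2", "P3", "P3"] * 2,
    "operator": ["A"] * 6 + ["B"] * 6,
    "value":    [2.01, 2.02, 2.10, 2.11, 1.95, 1.96,
                 2.03, 2.02, 2.12, 2.10, 1.97, 1.95],
})

# Partition variation into part, operator, and part-by-operator interaction;
# the leftover (residual) variation reflects repeatability of the gage.
model = ols("value ~ C(part) * C(operator)", data=data).fit()
print(anova_lm(model, typ=2))
```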
ANOVA
The acronym for analysis of variance.
bias
The predicted difference, on average, between a measurement and the true value. Bias is also known as accuracy.
common cause
A fixed source of variation within a system. A system may have multiple forms of common cause.
control chart
A graph used during SPC or MSA methods that charts data and provides a picture of how a process is performing over time.
control limit
A horizontal line on a control chart that represents a boundary for a process. If the process strays beyond a control limit, it is out of control.
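One common convention places the control limits three standard deviations above and below the process average. The sketch below illustrates that calculation for a hypothetical set of individual readings.

```python
# Compute +/-3-sigma control limits around the process average.
# The readings are hypothetical illustration data.
import statistics

readings = [10.02, 9.98, 10.01, 10.03, 9.97, 10.00, 10.02, 9.99]

center_line = statistics.mean(readings)
sigma = statistics.stdev(readings)     # sample standard deviation

ucl = center_line + 3 * sigma          # upper control limit
lcl = center_line - 3 * sigma          # lower control limit

print(f"CL = {center_line:.3f}, UCL = {ucl:.3f}, LCL = {lcl:.3f}")
```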
design specifications
A detailed plan for a part or object that includes dimensions and other precise descriptions of its manufacturing requirements.
gage calibration
The comparison of a device of unknown accuracy to a device with a known, accurate standard in order to detect and correct any error in the device being checked.
gage capability
A gage's predictable range of ability, even when under the influence of natural variation due to common causes.
gage repeatability and reproducibility
The combined effect of repeatability and reproducibility in a measuring system. GRR helps determine the type of variation in the measuring system, isolate product variation from measuring system variation, and reduce the overall gage error.
gage system error
The combination of gage stability, linearity, accuracy, repeatability, and reproducibility.
gage variation
The difference in multiple measurements taken by the same gage under similar conditions.
ideal gage capability study
A series of trials in which a gage is tested under the best circumstances to verify that gage specifications can be met. Ideal gage capability studies are often performed in a laboratory with as much random variation removed as possible.
ISO 9000
A quality management system based on standards published by the International Organization for Standardization.
linearity
The amount of error change throughout an instrument's measurement range. Linearity is also the amount of deviation from an instrument's ideal straight-line performance.
long form GRR study
A series of measurement trials that offers an accurate, mathematical method of calculating GRR. Long form GRR is also known as the range and average method.
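In broad terms, the range and average method estimates equipment variation (repeatability) from the average range of repeated readings and appraiser variation (reproducibility) from the difference between operator averages, then combines the two. The sketch below assumes two operators, five parts, and three trials; the readings are hypothetical and K1 and K2 are factors of the kind published in common GRR tables for that layout.

```python
# Long form (range and average) GRR sketch: two operators, five parts, three trials.
# All readings are illustrative; K1 and K2 are commonly published factors for
# 3 trials and 2 appraisers respectively.
import math

# readings[operator][part] -> list of trial values (hypothetical data)
readings = {
    "A": [[2.01, 2.02, 2.01], [2.10, 2.11, 2.10], [1.95, 1.96, 1.95],
          [2.05, 2.04, 2.05], [2.00, 2.01, 2.00]],
    "B": [[2.02, 2.03, 2.02], [2.11, 2.12, 2.11], [1.96, 1.97, 1.96],
          [2.06, 2.05, 2.06], [2.01, 2.02, 2.01]],
}
K1, K2 = 0.5908, 0.7071
n_parts, n_trials = 5, 3

# Average range of repeated trials across all operators and parts.
ranges = [max(trials) - min(trials)
          for parts in readings.values() for trials in parts]
r_bar = sum(ranges) / len(ranges)
ev = r_bar * K1                                    # equipment variation (repeatability)

# Difference between the operator averages.
op_means = [sum(sum(t) for t in parts) / (n_parts * n_trials)
            for parts in readings.values()]
x_diff = max(op_means) - min(op_means)
av_sq = (x_diff * K2) ** 2 - ev ** 2 / (n_parts * n_trials)
av = math.sqrt(max(av_sq, 0.0))                    # appraiser variation (reproducibility)

grr = math.sqrt(ev ** 2 + av ** 2)                 # combined gage R&R
print(f"EV = {ev:.4f}, AV = {av:.4f}, GRR = {grr:.4f}")
```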
manufacturing error
Variation caused by the manufacturing process that affects the size of the part. Manufacturing error is part of measurement value.
mean
The average of a numerical set. It is found by dividing the sum of a set of numbers by the number of members in the group.
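For example, the mean of 2, 4, and 9 is (2 + 4 + 9) / 3 = 5. A minimal sketch of the same calculation, using hypothetical values:

```python
# Mean: the sum of the set divided by the number of members.
values = [2, 4, 9]
mean = sum(values) / len(values)   # (2 + 4 + 9) / 3 = 5.0
print(mean)
```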
measurement assurance
The ability to quantify measurement uncertainty and show that the total uncertainty is small enough to meet product specifications. Measurement assurance requires a process approach similar to quality assurance.
measurement error
Variation caused by the measuring process that affects the measured size of the part. Measurement error is part of measurement value.
measurement uncertainty
The estimate of the difference between a measured value and the true value. Measurement uncertainty values are often reported along with the final value of a part.
measurement value
A combination of the actual measured value indicated by the instrument, the nominal value, manufacturing error, and measurement error.
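One way to picture how these components combine is to treat the reading on the instrument as the nominal value plus the manufacturing and measurement errors. The figures in the sketch below are hypothetical.

```python
# One possible decomposition of a measurement value (all figures hypothetical, in mm):
# actual measured value = nominal value + manufacturing error + measurement error.
nominal_value       = 25.000
manufacturing_error =  0.008   # part was actually made slightly oversize
measurement_error   = -0.002   # gage reads slightly low

actual_measured_value = nominal_value + manufacturing_error + measurement_error
print(f"{actual_measured_value:.3f}")   # 25.006
```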
measurement variation
A type of variation that occurs when the same object is measured multiple times but produces different results.
measuring system
The unique devices and processes used to measure and inspect a part, including the measuring device, the operator, the operator’s technique, the part, the feature being measured, the environment, and time.
measuring system analysis
The methods used to verify and monitor the accuracy and quality of a measuring system using statistical study of repeated tests of the gages and other parts of the system. MSA tools identify the amount of variation in the gage by isolating the measurement variation from the process variation.
measuring system performance study
A series of trials in which the measuring system is tested to determine the system's ability to consistently produce measurements using actual parts and realistic conditions.
measuring system study
A series of trials in which the measuring system is tested to determine the system's ability to consistently produce measurements.
nominal value
The size by which an object is known, such as a two-inch screw. Nominal value is part of measurement value.
practical gage capability study
A series of trials in which a gage is tested under real conditions to verify that gage specifications can be met. Practical gage capability studies are often performed in the shop with normal sources of variation present.
precision
The degree to which an instrument will repeat the same measurement over a period of time.
process variation
A type of variation that occurs when there are differences in multiple instances of the same process.
random variation
Differences that occur due to outside influences, such as temperature or vibration. Random variation can be detected and corrected through MSA.
range and average method
Another term for long form GRR study.
range method
Another term for short form GRR study.
repeatability
The variation that occurs among repeated measurements of the same part made by the same operator with the same gage. Repeatability is a form of random variation.
reproducibility
The difference in the average of groups of repeated measurements made by different operators. Reproducibility is also known as appraiser variation and between-operator variation.
short form GRR study
A series of measurement trials that offers a crude estimate of combined GRR. Short form GRR is also known as the range method.
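In a typical range method study, two operators each measure the same parts once, and the average of the per-part differences is divided by a d2* factor to give a rough combined GRR estimate. The sketch below uses hypothetical readings and the kind of d2* value commonly tabulated for two operators and five parts.

```python
# Short form (range method) GRR sketch: two operators, five parts, one trial each.
# Readings are hypothetical; d2_star is a commonly tabulated factor for this layout.
operator_a = [2.01, 2.10, 1.95, 2.05, 2.00]
operator_b = [2.03, 2.11, 1.97, 2.06, 2.02]
d2_star = 1.19

part_ranges = [abs(a - b) for a, b in zip(operator_a, operator_b)]
r_bar = sum(part_ranges) / len(part_ranges)

grr = r_bar / d2_star   # crude estimate of combined repeatability and reproducibility
print(f"Average range = {r_bar:.4f}, estimated GRR = {grr:.4f}")
```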
stability
The ability of a measuring instrument to retain its calibration over a long period of time. Stability determines an instrument's consistency over time.
standard deviation
A number representing the degree of variation within a numerical set.
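A minimal sketch of the usual sample standard deviation calculation, using hypothetical values:

```python
# Sample standard deviation: square root of the average squared deviation from the
# mean (dividing by n - 1 for a sample). Values are hypothetical illustration data.
import math

values = [9.8, 10.1, 10.0, 10.2, 9.9]
mean = sum(values) / len(values)
variance = sum((x - mean) ** 2 for x in values) / (len(values) - 1)
std_dev = math.sqrt(variance)
print(f"{std_dev:.4f}")
```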
statistical process control
The use of statistics and control charts to measure key quality characteristics and control how the related manufacturing process behaves.
system variation
Fixed differences that result from a series of common causes. System variation often exists within the gage itself in the form of bias (accuracy).
variation
The deviation of a value from a norm or standard.
working standard
A measurement standard used to check or calibrate measuring instruments. Gage blocks are an example of a working standard.
X-bar chart
A chart used to track a series of sample averages. |
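The points plotted on an X-bar chart are subgroup averages, typically shown against a center line at the grand average and limits built from the average range. The sketch below uses hypothetical subgroup data and the kind of A2 factor commonly tabulated for subgroups of five.

```python
# X-bar chart values: plot subgroup averages against X-double-bar +/- A2 * R-bar.
# Subgroup data and the A2 factor are hypothetical/common-table illustration values.
subgroups = [
    [10.1, 10.0, 9.9, 10.2, 10.0],
    [10.0, 10.1, 10.1, 9.8, 10.0],
    [9.9, 10.0, 10.2, 10.1, 9.9],
]
A2 = 0.577   # commonly tabulated factor for subgroup size 5

x_bars = [sum(s) / len(s) for s in subgroups]      # points plotted on the chart
x_double_bar = sum(x_bars) / len(x_bars)           # center line
r_bar = sum(max(s) - min(s) for s in subgroups) / len(subgroups)

ucl = x_double_bar + A2 * r_bar
lcl = x_double_bar - A2 * r_bar
print(f"X-bars: {[round(x, 3) for x in x_bars]}")
print(f"CL = {x_double_bar:.3f}, UCL = {ucl:.3f}, LCL = {lcl:.3f}")
```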