Collaborative Testing and Two-Sample Plots

When an analyst performs a single analysis on a sample, the difference between the experimentally determined value and the expected value is influenced by three sources of error: random errors, systematic errors inherent to the method, and systematic errors unique to the analyst. If the analyst performs enough replicate analyses, then we can plot a distribution of results, as shown here in (a). The width of this distribution is described by a standard deviation, which provides an estimate of the random errors affecting the analysis. The position of the distribution’s mean relative to the sample’s true value is determined both by systematic errors inherent to the method and by systematic errors unique to the analyst. For a single analyst there is no way to separate the total systematic error into its component parts.
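The point above can be made concrete with a short sketch. The data and the assumed true value below are hypothetical, chosen only for illustration: the standard deviation of the replicates estimates the random error, while the difference between the mean and the true value captures the total systematic error, which a single analyst cannot decompose further.

```python
import statistics

# Hypothetical replicate results (%w/w) from a single analyst
results = [3.25, 3.31, 3.28, 3.27, 3.30, 3.26]
true_value = 3.40  # assumed true concentration, for illustration only

mean = statistics.mean(results)
stdev = statistics.stdev(results)  # estimates the random error
total_bias = mean - true_value     # method bias + analyst bias, inseparable here

print(f"mean = {mean:.3f}, s = {stdev:.3f}, total bias = {total_bias:.3f}")
```

Note that `total_bias` lumps together the method's systematic error and the analyst's systematic error; separating them is exactly what the collaborative test described below is designed to do.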


The goal of a collaborative test is to determine the magnitude of all three sources of error. If several analysts each analyze the same sample one time, the variation in their collective results, as shown above in (b), includes contributions from random errors and those systematic errors (biases) unique to the analysts. Without additional information, we cannot separate the standard deviation for this pooled data into the precision of the analysis and the systematic errors introduced by the analysts. We can, however, use the position of the distribution to detect the presence of a systematic error in the method.

The design of a collaborative test must provide the additional information we need to separate random errors from the systematic errors introduced by the analysts. One simple approach—accepted by the Association of Official Analytical Chemists—is to have each analyst analyze two samples that are similar in both their matrix and in their concentration of analyte. To analyze their results we represent each analyst as a single point on a two-sample chart, using the result for one sample as the x-coordinate and the result for the other sample as the y-coordinate.


As illustrated above, a two-sample chart divides the results into four quadrants, which we identify as (+, +), (–, +), (–, –) and (+, –), where a plus sign indicates that the analyst’s result exceeds the mean for all analysts and a minus sign indicates that the analyst’s result is smaller than the mean for all analysts. The quadrant (+, –), for example, contains results for analysts whose result for sample X exceeded the mean and whose result for sample Y fell below the mean. If the variation in results is dominated by random errors, then we expect the points to be distributed randomly in all four quadrants, with an equal number of points in each quadrant. Furthermore, as shown in (a), the points will cluster in a circular pattern whose center is the mean values for the two samples. When systematic errors are significantly larger than random errors, the points occur primarily in the (+, +) and the (–, –) quadrants, forming an elliptical pattern around a line bisecting these quadrants at a 45° angle, as seen in (b).
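The quadrant assignment described above is easy to automate. The paired results below are hypothetical; each tuple holds one analyst's results for sample X and sample Y, and each point is classified by comparing its coordinates to the two sample means.

```python
# Hypothetical paired results: each tuple is one analyst's (sample X, sample Y)
pairs = [(5.1, 5.0), (4.8, 4.7), (5.3, 5.2), (4.9, 4.8), (5.0, 5.1), (5.2, 5.3)]

mean_x = sum(x for x, _ in pairs) / len(pairs)
mean_y = sum(y for _, y in pairs) / len(pairs)

def quadrant(x, y):
    """Classify an analyst's point relative to the means for samples X and Y."""
    return ("+" if x > mean_x else "-", "+" if y > mean_y else "-")

# Tally how many analysts fall in each quadrant
counts = {}
for x, y in pairs:
    q = quadrant(x, y)
    counts[q] = counts.get(q, 0) + 1
```

With real collaborative-test data, a tally concentrated in the (+, +) and (–, –) quadrants would signal that analyst systematic errors dominate, while a roughly even split across all four quadrants would point to random error.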

A visual inspection of a two-sample chart is an effective method for qualitatively evaluating the results of analysts and the capabilities of a proposed standard method. If random errors are insignificant, then the points fall on the 45° line. As illustrated here


the length of a perpendicular line from any point to the 45° line, shown in red, is proportional to the effect of random error on that analyst’s results. The distance from the intersection of the axes—corresponding to the mean values for samples X and Y—to the perpendicular projection of a point on the 45° line is shown in green and is proportional to the analyst’s systematic error. An ideal standard method has small random errors and small systematic errors due to the analysts, and shows a compact clustering of points that is more circular than elliptical.
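These two geometric distances can be computed directly. A minimal sketch, using the standard formulas for distance to and projection onto the 45° line through the point of means (the function name and sample values are my own, for illustration): centering a point at the means gives offsets (dx, dy), the perpendicular distance to the line is |dx − dy|/√2, and the signed projection along the line is (dx + dy)/√2.

```python
import math

def error_components(x, y, mean_x, mean_y):
    """Decompose an analyst's point into a perpendicular distance from the
    45° line through the means (proportional to random error) and a signed
    projection along that line (proportional to systematic error)."""
    dx, dy = x - mean_x, y - mean_y
    random_err = abs(dx - dy) / math.sqrt(2)   # perpendicular to the 45° line
    systematic_err = (dx + dy) / math.sqrt(2)  # along the 45° line
    return random_err, systematic_err

# Example: a point lying exactly on the 45° line has no random-error component
r, s = error_components(5.2, 5.3, mean_x=5.0, mean_y=5.1)
```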
