## Data Evaluation and Comparisons:

So far, this tutorial has focussed on statistical parameters (such as mean and standard deviation) and linear regression as applied to instrument calibration. In this section, we will look at methods for evaluating specific results. For example, you might want to know if the concentration obtained by a particular method agrees with the true concentration in order to establish whether or not the method was biased. Or, you might want to find out if two different methods yielded similar results or if one was more accurate or precise than the other. Such questions can be addressed using various significance tests, some of which have been mentioned elsewhere and are described in more detail in this section.
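The first question above, whether a method's result agrees with a known true value, is answered with a one-sample t-test. As a minimal sketch using entirely hypothetical data (five replicate measurements of a 50.0 ppm standard), the test statistic can be computed with the standard library alone:

```python
# Hypothetical example: five replicate measurements of a standard whose
# true concentration is 50.0 ppm. A one-sample t-test asks whether the
# method's mean differs significantly from the true value (i.e. bias).
from statistics import mean, stdev
from math import sqrt

replicates = [50.6, 50.1, 50.9, 50.3, 50.7]   # hypothetical data, ppm
true_value = 50.0

n = len(replicates)
x_bar = mean(replicates)
s = stdev(replicates)   # sample standard deviation (n - 1 denominator)

# Test statistic: t = (x_bar - mu) / (s / sqrt(n)), with n - 1 degrees of freedom
t_stat = (x_bar - true_value) / (s / sqrt(n))
print(f"mean = {x_bar:.2f}, s = {s:.3f}, t = {t_stat:.2f} (df = {n - 1})")
```

Here t ≈ 3.64 exceeds the tabulated two-tailed critical value t(α = 0.05, df = 4) = 2.776, so for this invented dataset the bias would be judged significant at the 95% confidence level.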

### Section Outline:

Hypotheses
→   Definition of statistical hypotheses about datasets
t-tests
→   t-tests for comparing the means of different datasets
One- & Two-tailed tests
→   Testing whether a mean is greater than, less than, or not equal to another mean
F-test
→   Testing differences between standard deviations of datasets, for comparing precision
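As a preview of the F-test listed above, the statistic is simply the ratio of the two sample variances, with the larger variance in the numerator. A minimal sketch with hypothetical replicate data from two methods:

```python
# Hypothetical example: comparing the precision of two methods via an
# F-test on their sample variances. Placing the larger variance in the
# numerator guarantees F >= 1.
from statistics import variance

method_a = [10.2, 10.4, 9.9, 10.1, 10.3]   # hypothetical replicates
method_b = [10.0, 10.8, 9.5, 10.6, 9.9]

var_a = variance(method_a)   # sample variances (n - 1 denominator)
var_b = variance(method_b)

F = max(var_a, var_b) / min(var_a, var_b)
print(f"F = {F:.2f} with (4, 4) degrees of freedom")   # n - 1 = 4 for each method
```

The computed F is then compared with a tabulated critical value for the appropriate degrees of freedom and significance level; if F exceeds the critical value, the two precisions differ significantly.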

### General Procedure:

Whatever question you wish to answer, and whatever test you wish to apply, there are certain common steps that should be followed. These are summarized graphically in the concept map, and also in the following list:

1. Decide which test to perform - either a test for outliers in replicate data, a comparison of one or two mean values, or a comparison of standard deviations
2. Define a relevant statistical hypothesis for significance testing
3. Choose a confidence or significance level for your test (e.g. a 95% CL, α = 0.05)
4. Determine if the test should be 1- or 2-tailed
5. Determine the number of degrees of freedom
6. Compute the test statistic using the appropriate formula
7. Either compare the test statistic to the relevant critical value, or calculate the probability p associated with obtaining your test statistic value to determine if p < α