Earlier in this section you saw how to perform a *t*-test to compare a sample
mean to an accepted value, or to compare two sample means. In this section, you will see
how to use the *F*-test to compare two variances or standard deviations.

When using the *F*-test, you again require a hypothesis, but this time it compares standard deviations. That is, you will
test the null hypothesis *H*_{0}: *σ*_{1}^{2} = *σ*_{2}^{2} against
an appropriate alternate hypothesis.

You calculate the *F*-value as the ratio of the two variances:

*F* = *s*_{1}^{2} / *s*_{2}^{2}

where *s*_{1}^{2} ≥ *s*_{2}^{2}, so that *F* ≥ 1. The degrees of freedom for the
numerator and denominator are *n*_{1} − 1 and *n*_{2} − 1, respectively. As with the *t*-test,
you compare *F*_{calc} to a tabulated value, *F*_{tab}, to see if you should accept or reject the null hypothesis. As well, you can perform 1- or 2-tailed *F*-tests. The following two examples illustrate the use of 1- and 2-tailed tests.
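This ratio can be sketched as a small Python helper (a minimal sketch; the function name `f_calc` is our own, and only the formula comes from the text):

```python
def f_calc(s1, s2):
    """Return the F-value for two sample standard deviations,
    placing the larger variance in the numerator so that F >= 1."""
    v1, v2 = s1 ** 2, s2 ** 2          # convert standard deviations to variances
    return max(v1, v2) / min(v1, v2)

# The degrees of freedom for each sample are n - 1.
print(f_calc(1.2, 0.8))   # ratio of variances, approximately 2.25
```

Taking the maximum over the minimum means the caller never needs to worry about argument order.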

### Example 1

As an example, assume we want to see if a method (Method 1) for measuring the arsenic concentration in soil is significantly more precise than a second method (Method 2). Each method was tested ten times, yielding the following values:

| Method | Mean (ppm) | Standard Deviation (ppm) |
|--------|------------|--------------------------|
| 1      | 6.7        | 0.8                      |
| 2      | 8.2        | 1.2                      |

A method is more precise if its standard deviation is lower than that of the other method. So we want to test the null hypothesis
*H*_{0}: *σ*_{2}^{2} = *σ*_{1}^{2}, against the alternate hypothesis
*H*_{A}: *σ*_{2}^{2} > *σ*_{1}^{2}.

Since *s*_{2} > *s*_{1}, *F*_{calc} = *s*_{2}^{2}/*s*_{1}^{2} = 1.2^{2}/0.8^{2} = 2.25. The tabulated value for *ν* = 9 degrees of freedom in each case, at a 1-tailed, 95% confidence level, is *F*_{9,9} = 3.179. In this case, *F*_{calc} < *F*_{9,9}, so we accept the null hypothesis that the two standard deviations are equal, and we are 95% confident that any difference in the sample standard deviations is due to random error. We use a 1-tailed test in this case because the only information we are interested in is whether Method 1 is more precise than Method 2.
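The comparison in this example can be reproduced in a few lines of Python (a sketch using the values above; the critical value 3.179 is taken from the text, and the variable names are our own):

```python
s1, s2 = 0.8, 1.2                              # standard deviations from the table (ppm)
f_calc = max(s1, s2) ** 2 / min(s1, s2) ** 2   # larger variance in the numerator
f_tab = 3.179                                  # tabulated F(9,9), 1-tailed, 95% confidence

# F_calc (2.25) is below the critical value, so the null hypothesis stands.
decision = "accept H0" if f_calc < f_tab else "reject H0"
print(decision)
```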

### Example 2

If we are not interested in whether one method is more precise than another, but simply want to determine whether two variances
are the same or different, we would need to use a 2-tailed test. For instance, assume we made two sets of measurements of ethanol concentration
in a sample of vodka using the same instrument, but on two different days. On the first day, we found a standard deviation of *s*_{1}
= 9 ppm and on the next day we found *s*_{2} = 2 ppm. Both datasets comprised 6 measurements. We want to know if we can combine the two
datasets, or if there is a significant difference between the datasets, and that we should discard one of them.

As usual, we begin by defining the null hypothesis, *H*_{0}: *σ*_{1}^{2} =
*σ*_{2}^{2}, and the alternate hypothesis, *H*_{A}: *σ*_{1}^{2}
≠ *σ*_{2}^{2}. The "≠" sign indicates that this is a 2-tailed test, because we are interested in both cases:
*σ*_{1}^{2} > *σ*_{2}^{2} and *σ*_{1}^{2} <
*σ*_{2}^{2}. For the *F*-test, you can perform a 2-tailed test by doubling the significance level *P*,
so from a table for a 1-tailed test at *P* = 0.05, we would perform a 2-tailed test at *P* = 0.10, corresponding to a 90% confidence level.

For this dataset, *s*_{1} > *s*_{2}, so *F*_{calc} = *s*_{1}^{2}/*s*_{2}^{2} = 9^{2}/2^{2} = 20.25. The tabulated value for *ν* = 5 in each case, at 90% confidence, is *F*_{5,5} = 5.050. Since *F*_{calc} > *F*_{5,5}, we reject the null hypothesis, and can say with 90% certainty that there is a difference between the standard deviations of the two datasets.
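The same decision can be checked numerically (a sketch using the values above; the critical value 5.050 is taken from the text):

```python
s1, s2 = 9.0, 2.0                              # standard deviations on day 1 and day 2 (ppm)
f_calc = max(s1, s2) ** 2 / min(s1, s2) ** 2   # 81 / 4 = 20.25
f_tab = 5.050                                  # tabulated F(5,5), 2-tailed, 90% confidence

# F_calc far exceeds the critical value, so the variances differ significantly.
decision = "reject H0" if f_calc > f_tab else "accept H0"
print(decision)
```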

Tables for other confidence levels can be found in most statistics or analytical chemistry textbooks. When using these tables,
pay attention to whether the table is for a 1- or a 2-tailed test. In most cases, tables are given for 2-tailed tests, so you can divide
the stated *P* value by 2 for the 1-tailed test. For the *F*-test, always ensure that the larger standard deviation is in the numerator, so that *F* ≥ 1.