To determine whether the means of two groups differ statistically, researchers have to look at the difference between the group means in relation to the variability of the data. The t-test expresses this as a ratio: the difference between the group means divided by the variability within the groups.

The t-test allows researchers to distinguish a clear signal of a statistical difference from an apparent difference that does not count as significant. High variability in the data makes statistical differences harder to detect, whereas a large difference between the means serves as a signal of statistical significance between the two groups of data.

A directional hypothesis indicates whether the alternative hypothesis value is significantly larger or smaller than the null hypothesis value. A non-directional hypothesis indicates that the means of two samples are different; however, it does not specify the direction of the difference.

Researchers use two-tailed tests to analyze a non-directional hypothesis. Under non-directional hypothesis testing, results are interesting in both directions of the mean. The null hypothesis specifies a value, and the alternative hypothesis states only that the results will differ from it. The researcher therefore has to consider two possible directions: negative and positive relative to the null hypothesis value.

Using the t-test formula, one obtains a positive value when the first mean is larger than the second and a negative value when the second mean is larger than the first. The difference detected by the t-test might be due to chance. To check whether this is the case, the researcher looks up the t-test value in a table of significance to see whether the ratio (the t value) is larger than the critical value attributable to chance.
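As a sketch of that sign behaviour, the two-sample t statistic (here the pooled-variance version, computed on hypothetical data) can be calculated directly:

```python
from math import sqrt

def two_sample_t(group1, group2):
    """Pooled-variance two-sample t statistic:
    (mean1 - mean2) divided by the standard error of the difference."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    # Sample variances (dividing by n - 1).
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    # Pooled variance weights each group's variance by its degrees of freedom.
    pooled = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    return (m1 - m2) / sqrt(pooled * (1 / n1 + 1 / n2))

a = [5.1, 4.9, 5.3, 5.0, 5.2]  # hypothetical measurements, group A
b = [4.2, 4.5, 4.1, 4.4, 4.3]  # hypothetical measurements, group B

t_ab = two_sample_t(a, b)  # positive: the mean of a exceeds the mean of b
t_ba = two_sample_t(b, a)  # same magnitude, opposite sign
```

Swapping the order of the two groups flips only the sign of t, not its magnitude, which is why the sign simply reports which mean was larger.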

Under a directional hypothesis, using a one-tailed t-test, the table of significance provides a critical value that cuts off the entire alpha value at one end of the distribution. For a non-directional hypothesis, using a two-tailed t-test, half of the alpha value is cut off at each end of the distribution around the null hypothesis value.

To control the accuracy of the t-test, a risk level is set to indicate how often you are willing to declare a statistically significant difference by chance when in reality there is none. In most cases this risk level, also known as the alpha level, is set to 0.05, meaning that chance alone would account for a significant result in five out of one hundred tests.
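One way to see what the 0.05 alpha level means is a small simulation (a sketch with made-up parameters): draw many pairs of samples from the same population, so the null hypothesis is true by construction, and count how often the t statistic exceeds the two-tailed critical value. Roughly five percent of the tests should come out "significant" by chance alone.

```python
import random
from math import sqrt

random.seed(1)  # fixed seed so the run is repeatable

def pooled_t(g1, g2):
    # Pooled-variance two-sample t statistic.
    n1, n2 = len(g1), len(g2)
    m1, m2 = sum(g1) / n1, sum(g2) / n2
    v1 = sum((x - m1) ** 2 for x in g1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in g2) / (n2 - 1)
    pooled = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    return (m1 - m2) / sqrt(pooled * (1 / n1 + 1 / n2))

CRITICAL = 2.101  # two-tailed critical t for alpha = 0.05, df = 18
trials = 2000
false_positives = 0
for _ in range(trials):
    # Both samples come from the same population: any "difference" is chance.
    a = [random.gauss(0, 1) for _ in range(10)]
    b = [random.gauss(0, 1) for _ in range(10)]
    if abs(pooled_t(a, b)) > CRITICAL:
        false_positives += 1

rate = false_positives / trials  # should land near 0.05
```

With 2,000 simulated tests the observed rate fluctuates around 0.05, which is exactly the error frequency the alpha level promises.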

A researcher also needs the degrees of freedom, which for a two-sample test is the total number of data values in both groups minus two. Three values, namely the alpha level, the degrees of freedom, and the t-test value, form a complete set for looking up significance in a standard table. When the lookup indicates that the t-test value is significant, it serves as an indication that the means of the two data groups are different.
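Putting the three values together, a minimal sketch of the lookup might look like this (the critical values are a small excerpt from a standard two-tailed table at alpha = 0.05; the data are hypothetical):

```python
from math import sqrt

# Excerpt from a standard two-tailed table of critical t values, alpha = 0.05.
CRITICAL_T_05_TWO_TAILED = {8: 2.306, 10: 2.228, 18: 2.101, 28: 2.048}

def two_sample_t_test(group1, group2):
    """Return the t statistic, degrees of freedom, and significance decision."""
    n1, n2 = len(group1), len(group2)
    df = n1 + n2 - 2  # total number of values in both groups minus two
    m1, m2 = sum(group1) / n1, sum(group2) / n2
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled = ((n1 - 1) * v1 + (n2 - 1) * v2) / df
    t = (m1 - m2) / sqrt(pooled * (1 / n1 + 1 / n2))
    # The lookup: is the ratio larger than the tabled value for chance?
    significant = abs(t) > CRITICAL_T_05_TWO_TAILED[df]
    return t, df, significant

a = [23.1, 24.5, 22.8, 25.0, 23.9]  # hypothetical group A
b = [20.2, 19.8, 21.1, 20.5, 19.9]  # hypothetical group B
t, df, significant = two_sample_t_test(a, b)  # df = 5 + 5 - 2 = 8
```

In practice the hard-coded dictionary would be replaced by a full table or an inverse-CDF routine, but the three-value lookup it performs is the same.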

In hypothesis testing where the population standard deviation is unknown, researchers have to use an estimate. In such cases a t-test comes in handy, compared to a z-score test, because its formula uses the standard deviation estimated from the sample.

Another way to look at it is that the t-test is used for normally distributed (bell-shaped) sample means when the population variance must be estimated, while the z-score test is used when the actual mean and variance of the population are known. When there is only one sample for testing the hypothesis, a single-sample t-test suffices.

However, in the case of two related sets of data values, a paired t-test is appropriate. In a paired t-test analysis, the researcher computes the difference for each pair of data values and then performs a single-sample t-test on the differences obtained.
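That procedure can be sketched as a single-sample t-test on the pairwise differences (the before/after data here are hypothetical; the null hypothesis is a mean difference of zero):

```python
from math import sqrt

def paired_t(before, after):
    """One-sample t statistic on the pairwise differences,
    testing whether the mean difference is zero. df = n - 1."""
    assert len(before) == len(after), "paired data must come in matched pairs"
    # The same rule (after minus before) is applied to every pair.
    diffs = [a - b for a, b in zip(after, before)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)
    return mean_d / sqrt(var_d / n), n - 1

before = [12.0, 11.5, 13.2, 12.8, 11.9]  # hypothetical pre-treatment scores
after = [13.1, 12.0, 14.0, 13.5, 12.6]   # the same subjects, post-treatment

t, df = paired_t(before, after)  # df = 5 - 1 = 4
```

Because the two samples measure the same subjects, working on the differences removes the subject-to-subject variability and leaves only the change of interest.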

A key point to consider is that the difference must be computed the same way for each pair of data values. Researchers use the paired t-test to test hypotheses on related samples. When the two samples are unrelated, an unpaired t-test is the appropriate choice.