Sampling methods
Sampling methods are divided into two groups – probability sampling and non-probability sampling. The latter is not recommended for research due to potential errors and bias. Probability sampling is divided into four subgroups (“Simple random sampling,” n.d.): simple random sampling, stratified sampling, cluster sampling, and systematic sampling.
- Simple random sampling. In this sampling method, each member of the population has an equal probability of being chosen. An example of this method would be choosing five students out of 30 by drawing their names out of a hat.
- Stratified sampling. As the name implies, this method divides the population into subgroups (strata) and takes a random sample from every stratum.
- Cluster sampling. This method divides the population into separate groups (clusters), selects some clusters at random, and then samples members from the chosen clusters. Example: geographical cluster sampling.
- Systematic sampling. Samples are selected at equal intervals from a random starting point. Example – sampling every 5th student in the group (see the code sketch after this list).
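To make the four methods concrete, here is a minimal Python sketch, assuming a hypothetical group of 30 students; the sample sizes, stratum labels, and cluster sizes are arbitrary choices for illustration.

```python
import random

population = [f"student_{i}" for i in range(1, 31)]  # hypothetical group of 30 students

# Simple random sampling: every member has an equal chance of being chosen.
simple = random.sample(population, k=5)

# Systematic sampling: every 5th member, starting from a random point.
start = random.randrange(5)
systematic = population[start::5]

# Stratified sampling: sample separately from each subgroup (stratum).
strata = {"year_1": population[:15], "year_2": population[15:]}
stratified = [s for group in strata.values() for s in random.sample(group, k=2)]

# Cluster sampling: split the population into groups, then pick whole clusters at random.
clusters = [population[i:i + 6] for i in range(0, 30, 6)]
chosen_clusters = random.sample(clusters, k=2)

print(simple, systematic, stratified, chosen_clusters, sep="\n")
```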
Z-tests and T-tests
Z-tests and T-tests are widely used in statistics and can be applied to almost any subject of study, from business to mathematics, sociology, and science. These tests are used to test hypotheses and support or reject them through statistical means. A Z-test allows researchers to compare a sample mean to a population mean. A T-test, on the other hand, allows them to compare two population means through statistical examination. Z-tests prevail over T-tests in research for several reasons. First, T-tests are typically applied to samples of 30 or fewer; Z-tests are preferable for larger samples, and research favors large samples because they capture greater amounts of variation. However, a Z-test requires knowing the population standard deviation. If there is no data on the population standard deviation, then a T-test is preferable. To summarize, a T-test should be used when the sample size is below 30 or when there is no data on the population standard deviation (Botts, n.d.).
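As a rough illustration of the distinction, here is a short Python sketch using scipy; the measurements, hypothesized mean, and population standard deviation are all invented for the example. The Z-test plugs in the known population standard deviation, while the T-test estimates the spread from the sample itself.

```python
import math
from scipy import stats

sample = [12.1, 11.8, 12.5, 12.0, 11.6, 12.3, 12.2, 11.9]  # hypothetical measurements
mu0 = 12.0    # hypothesized population mean
sigma = 0.4   # population standard deviation (must be known for a Z-test)

# Z-test: compare the sample mean to mu0 using the known sigma.
n = len(sample)
z = (sum(sample) / n - mu0) / (sigma / math.sqrt(n))
p_z = 2 * stats.norm.sf(abs(z))  # two-tailed p-value

# T-test: no sigma required; the spread is estimated from the sample.
t, p_t = stats.ttest_1samp(sample, popmean=mu0)

print(f"Z-test: z = {z:.3f}, p = {p_z:.3f}")
print(f"T-test: t = {t:.3f}, p = {p_t:.3f}")
```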
What the alpha signifies
Significance levels ought to be determined prior to the start of the research. The alpha signifies the maximum acceptable risk of rejecting a null hypothesis that is actually true (Frost, 2015). The standard level of significance for most studies is 0.05, though it can be adjusted. In general, a larger significance level (0.1) is picked in order to detect any possible variation.
For example, when testing the stability of ball bearings in an automobile, it is better to choose a larger alpha (0.1), as it allows the researchers to detect greater variations in stability. On the other hand, when testing something very important and sensitive, such as a pharmaceutical product, the researchers might want to pick a smaller alpha (0.01) to ensure that the drug does significantly reduce the symptoms of the disease before making a bold advertising claim.
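The effect of the chosen alpha is easy to see in code: the same hypothetical p-value leads to different conclusions depending on the significance level picked before the study.

```python
p_value = 0.03  # hypothetical p-value obtained from a test

for alpha in (0.10, 0.05, 0.01):
    decision = "reject the null hypothesis" if p_value < alpha else "fail to reject"
    print(f"alpha = {alpha:.2f}: {decision}")

# alpha = 0.10: reject the null hypothesis
# alpha = 0.05: reject the null hypothesis
# alpha = 0.01: fail to reject
```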
ANOVA
ANOVA, or analysis of variance (here in its one-way form), is a statistical test used to compare means among more than two samples (Hindle, 2013). When comparing two groups, we can use a simple T- or Z-test; however, this becomes harder when there are multiple groups. ANOVA compares the variation between groups with the variation within groups, and it is widely used across many kinds of settings and samples. When we take samples from a population, it is very likely that they will differ due to chance; however, we also expect every result to stay reasonably close to the general mean. ANOVA helps answer whether the differences among group means are greater than what chance alone would produce; in other words, whether a real difference exists in the population means.
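Here is a minimal sketch of a one-way ANOVA in Python, assuming three hypothetical groups of exam scores (the data and group labels are invented for illustration); scipy's f_oneway returns the F statistic and the p-value for the null hypothesis that all group means are equal.

```python
from scipy import stats

# Hypothetical exam scores from three groups taught by different methods
group_a = [85, 88, 90, 86, 87]
group_b = [78, 82, 80, 79, 81]
group_c = [90, 92, 91, 89, 93]

# One-way ANOVA: is the variation between group means larger than the
# variation within groups would produce by chance alone?
f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```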
A statistical interaction
A statistical interaction happens when the effect of one independent variable on the dependent variable differs across the levels of another independent variable (“Interaction effects in ANOVA,” n.d.). To demonstrate this in a patient population, let us assume a hypothetical drug test. There are two groups of patients: one is given a drug against stomachache, and the other a placebo. Each group comprises men and women. The dependent variable here is the level of pain; the independent variables are the treatment (drug vs. placebo) and gender. Here is a hypothetical table of mean pain levels (illustrative values):

| | Drug | Placebo |
| --- | --- | --- |
| Men | 3 | 7 |
| Women | 7 | 3 |
If either of the independent variables were ignored, the results would be distorted: we would simply conclude that the drug does not work, or that there is no statistical difference between the male and female populations. However, since we are analyzing the interaction between the factors, we can clearly see that the drug reduces pain levels in men but at the same time increases pain for women. The interaction presented in this example is called a “pure interaction,” because there is no main effect, only an interaction. Analyzing statistical interactions is an important part of the analysis, as without it the results could be misinterpreted.
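A sketch of how such an interaction could be tested in Python with a two-way ANOVA, assuming hypothetical pain scores that follow the pattern in the table above (the column names and values are invented for illustration):

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical pain scores: the drug lowers pain for men but raises it for women.
df = pd.DataFrame({
    "treatment": ["drug"] * 6 + ["placebo"] * 6,
    "gender":    ["M", "M", "M", "F", "F", "F"] * 2,
    "pain":      [3, 2, 4, 7, 8, 6,   # drug group: men low, women high
                  7, 6, 8, 3, 4, 2],  # placebo group: men high, women low
})

# Two-way ANOVA with an interaction term (treatment * gender)
model = ols("pain ~ C(treatment) * C(gender)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```

With this pattern, the main effects of treatment and gender are near zero while the interaction term is significant, matching the “pure interaction” described above.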
References
Botts, V. (n.d.). Z-test & T-test: Similarities and differences. Web.
Frost, J. (2015). Understanding hypothesis tests: Significance levels (alpha) and P values in statistics. Web.
Hindle, A. (2013). Statistics: ANOVA explained. Web.
Interaction effects in ANOVA. (n.d.). Web.
Simple random sampling and other sampling methods. (n.d.). Web.