Introduction
The use of statistics in scientific and medical journals has been a subject of review by many scholars in recent years. Various scholarly journals have published systematic reviews of the statistical techniques used in research, and these reviews indicate significant room for improvement. More than half of the research articles published in academic journals contain some form of statistical error. Although a variety of errors are recorded, the common ones include inadequate documentation of the statistical techniques employed and improper choice of the statistical technique used in hypothesis testing. This article evaluates the use of statistics in testing the effect of hot air in combination with Pichia guilliermondii on post-harvest anthracnose rot of loquat fruit.
Main body
The experiment set out to investigate how hot air combined with Pichia guilliermondii affects post-harvest anthracnose rot of loquat fruit. To achieve this, a number of parameters were investigated, including the effect of heat treatment and P. guilliermondii on loquat quality, on loquat decay, on artificially inoculated infection, on spore germination and mycelial growth of C. acutatum, on SOD and CAT activity and H2O2 content, on PAL, POD and PPO activity and lignin content, and on β-1,3-glucanase activity. It is important to note that no clear hypothesis is stated at the outset of the experiment. However, a pre-defined criterion that helps shape the findings is given: in each case a value of P < 0.05 was considered statistically significant. The statistical approach employed in this experiment was one-way analysis of variance (ANOVA).
One-way analysis of variance (ANOVA) is applicable in determining whether a given factor, e.g. a drug treatment regime, significantly affects another variable, e.g. gene expression, across various study groups. In such an instance, a significant p-value derived from one-way ANOVA indicates that the gene is differentially expressed in at least one of the groups under evaluation. When more than two groups are analyzed, however, one-way ANOVA does not indicate which specific pair of groups differs. In such cases, post hoc tests are used to determine the specific pairs that differ. In this experimental analysis, the effect of two factors on the other parameters was tested.
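To make this concrete, the following is a minimal sketch in Python of how a one-way ANOVA followed by a post hoc test might be run on data of this kind. The group names and decay values are invented for illustration and are not the measurements from the loquat study.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical decay indices for four illustrative treatment groups
# (values are invented for demonstration only).
control  = np.array([3.1, 2.8, 3.4, 3.0, 2.9])
hot_air  = np.array([2.4, 2.2, 2.6, 2.3, 2.5])
yeast    = np.array([2.1, 2.0, 2.3, 1.9, 2.2])
combined = np.array([1.4, 1.2, 1.5, 1.3, 1.6])

# One-way ANOVA: tests whether at least one group mean differs.
f_stat, p_value = stats.f_oneway(control, hot_air, yeast, combined)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# If p < 0.05, a post hoc test (here Tukey's HSD) identifies
# which specific pairs of groups differ.
values = np.concatenate([control, hot_air, yeast, combined])
labels = (["control"] * 5 + ["hot air"] * 5
          + ["yeast"] * 5 + ["combined"] * 5)
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```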
A closer critique of the statistical approach begins with the tool itself. Whereas one-way analysis of variance measures the significance of the effects of a single factor, two-way analysis of variance measures the effects of two factors simultaneously. The article under evaluation investigated the effect of two factors acting simultaneously; using one-way ANOVA therefore fails to acknowledge that two factors act at the same time. Using two-way ANOVA would have allowed the interaction between the two factors to be estimated. Three values are generated: two represent the main effect of each factor independently, while the third measures the interaction between the two.
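A minimal sketch of the two-way alternative follows, assuming hypothetical column names (heat, yeast, decay) and invented values rather than the authors' actual data; the resulting ANOVA table reports the two main effects and the interaction term described above.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical long-format data: each row is one fruit, with its
# heat treatment, yeast treatment and observed decay (invented values).
df = pd.DataFrame({
    "heat":  ["no", "no", "yes", "yes"] * 5,
    "yeast": ["no", "yes", "no", "yes"] * 5,
    "decay": [3.0, 2.1, 2.4, 1.3, 2.9, 2.2, 2.5, 1.4, 3.1, 2.0,
              2.3, 1.5, 3.2, 1.9, 2.6, 1.2, 2.8, 2.3, 2.2, 1.6],
})

# Two-way ANOVA: main effect of heat, main effect of yeast,
# and the heat x yeast interaction.
model = ols("decay ~ C(heat) * C(yeast)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```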
The experimental design used in the experiment being evaluated comprised a total of 80 fruits grouped into four groups of 20 fruits each. The sample, although relatively small, is reasonably representative of the overall population, and the grouping allows variation of the test variables. It should further be noted that adequate experimental time was provided to record the changes occurring over this duration. Overall, the experimental design provided adequate time for drawing conclusions.
On the other hand, it is important to mention that errors in analysis often result in data misinterpretation and the drawing of faulty conclusions. A commonly occurring error is the failure to adjust or account for errors arising from multiple comparisons. When more than one experimental group is used and the groups are compared against each other, the level of significance (P value) must be adjusted to cater for the multiple comparisons. The researcher used a significance level of 0.05. In this case, the overall error rate is given by 1 − (1 − 0.05)^k, where k represents the number of comparisons. As more treatment groups or time points are included, the probability of a falsely significant result rises.
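The growth of this overall (family-wise) error rate can be illustrated with a short computation; the comparison counts below are arbitrary examples, not the number of comparisons actually made in the study.

```python
# Family-wise error rate for k independent comparisons at alpha = 0.05:
# FWER = 1 - (1 - alpha) ** k
alpha = 0.05
for k in (1, 3, 6, 10):
    fwer = 1 - (1 - alpha) ** k
    print(f"k = {k:2d} comparisons -> probability of at least "
          f"one false positive = {fwer:.3f}")
```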
Deciding whether, and how, to adjust for multiple comparisons requires that a number of factors be considered. These include prior planning, the number of comparisons, and the design of the study. The Bonferroni adjustment offers an example of a simple correction: it involves multiplying the observed P values by the number of comparisons under consideration. Any claim of significant differences between the study groups therefore needs to be accompanied by the corresponding significance tests. Even where the observed differences between groups are large, the possibility that they arose by chance cannot be dismissed; statistical analysis is the only reliable way to establish that the differences observed are not solely due to chance.
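A sketch of the Bonferroni adjustment, applied to a set of invented P values, shows the mechanics; statsmodels' multipletests is used here as one convenient implementation.

```python
from statsmodels.stats.multitest import multipletests

# Hypothetical raw P values from several pairwise comparisons.
raw_p = [0.012, 0.030, 0.048, 0.200]

# Bonferroni: each P value is multiplied by the number of comparisons
# (and capped at 1); rejection is then judged against alpha = 0.05.
reject, adj_p, _, _ = multipletests(raw_p, alpha=0.05, method="bonferroni")
for p, p_adj, r in zip(raw_p, adj_p, reject):
    print(f"raw p = {p:.3f} -> adjusted p = {p_adj:.3f}, significant: {r}")
```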
Two other types of errors, although less common, are worth noting. When multiple samples are taken from the same experimental source, applying statistics that assume independence may be inappropriate. The study did not provide adequate information to determine whether a test for independent or dependent samples was necessary. Researchers should indicate instances where multiple samples are acquired from the same source and use appropriate statistical tools and techniques accordingly. For multiple independent experiments, each experiment should be summarized separately, or the data combined into a single representative experiment. The author therefore should have considered using a block ANOVA to account for experimental variability, as sketched after this paragraph. Despite a few points worth criticizing, the general presentation of the results is appealing: the author has used a number of graphs to illustrate his results, presenting a clear picture of the findings.
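A minimal sketch of the block-ANOVA idea, assuming a hypothetical replicate (block) column that is not part of the published data, might look as follows.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical data: 'block' identifies the independent experimental run,
# so run-to-run variability is separated from the treatment effect.
df = pd.DataFrame({
    "treatment": ["control", "hot air", "yeast", "combined"] * 3,
    "block":     ["run1"] * 4 + ["run2"] * 4 + ["run3"] * 4,
    "decay":     [3.0, 2.4, 2.1, 1.3,
                  3.2, 2.6, 2.2, 1.5,
                  2.9, 2.3, 2.0, 1.2],
})

# Blocked (randomized block) ANOVA: the treatment effect is tested after
# accounting for block-to-block variability.
model = ols("decay ~ C(treatment) + C(block)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```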
Reporting errors, however, are not necessarily indicative of incomplete or inappropriate statistical analysis; such errors often result from a failure to describe the variability of the study sample. This is an error that the research under evaluation has successfully avoided by adequately providing measures of variability.
Various tests, including the t-test, have a number of variants, and researchers therefore need to state which variant was applied. When t-tests are used, researchers need to indicate whether the paired-samples or the independent-samples version has been used. Additionally, clear statements should be made as to whether the applied tests are one-sided or two-sided, alongside the chosen level of significance, as is the case in the evaluated research, where it was indicated that P values less than 0.05 are considered significant. In the case of an independent-samples test, it should also be stated whether the variances of the groups under comparison are assumed to be equal. At times, however, it is sufficient to cite only the test, the chosen significance level and whether the test is one-sided or two-sided.
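The distinction between these t-test variants can be made explicit in code; the arrays below are invented values used only to show which function and options correspond to each variant (the alternative keyword assumes a reasonably recent SciPy release).

```python
import numpy as np
from scipy import stats

before  = np.array([3.1, 2.8, 3.4, 3.0, 2.9])   # e.g. the same fruits at day 0
after   = np.array([2.4, 2.2, 2.6, 2.3, 2.5])   # the same fruits at day 7
group_a = np.array([3.1, 2.8, 3.4, 3.0, 2.9])   # independent group A
group_b = np.array([2.1, 2.0, 2.3, 1.9, 2.2])   # independent group B

# Paired-samples t-test: the same experimental units measured twice.
print(stats.ttest_rel(before, after))

# Independent-samples t-test assuming equal variances (classic Student's t).
print(stats.ttest_ind(group_a, group_b, equal_var=True))

# Independent-samples t-test without assuming equal variances (Welch's t),
# stated explicitly as a two-sided test judged at the 0.05 level.
print(stats.ttest_ind(group_a, group_b, equal_var=False,
                      alternative="two-sided"))
```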
It is also of great importance that researchers decide on the statistical analysis strategy prior to the commencement of the experiment. It is tempting to attempt various statistical analysis strategies and approaches, and this often increases the chances of false reporting. This is another area in which the research under evaluation has succeeded. From the outset, the statistical approach to be used is clearly articulated: the researcher has effectively singled out ANOVA as the statistical tool applied in the research. He goes further to state how the data were collected, prepared and evaluated, and how the conclusions were ultimately drawn. The test significance level is clearly stated as 0.05, thereby taking into account random errors which may be incurred during the experiment. The statistical findings recorded are straightforward and offer a direct comparison of the groups under study.
Conclusion
In summary, although the statistical report in this case is straightforward and directly compares the groups, the effect of errors should not be ignored. Recognizing and appropriately attending to these errors assists in choosing appropriate statistical methods and using them correctly, which enhances both the validity and the quality of the research. Generally, though, it must be said that the researcher's use of statistics is effective, and the researcher has been able to successfully display his intended results. The researcher elaborates the steps involved in setting up the experiment, the statistical tools used in analysis, the analysis approach adopted and, finally, the findings. The paper is organized systematically and, despite the few errors discussed earlier, it is on the whole a sound application of statistics in research. The research has achieved most of its objectives and has been able to clearly answer the question it intended to.