The data analysis process begins once all the necessary information has been obtained and appropriately structured. This provides the basis for the first stage of the process: primary data processing. It is important to analyze the results of each study as soon as possible after its completion, while the researcher's memory can still supply details that, for whatever reason, were not recorded but matter for understanding the essence of the matter. When the collected data are processed, it may turn out that they are insufficient or contradictory and therefore do not provide grounds for final conclusions.
In this case, the study must be continued with the required additions. After collecting information from various sources, the researcher needs to determine what exactly is required for the initial needs analysis in accordance with the task at hand. In most cases, it is advisable to begin processing by compiling pivot tables of the data obtained (Simplilearn, 2021). For both manual and computer processing, the initial data are most often entered into an original pivot table. Computer processing has recently become the predominant form of mathematical and statistical processing.
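Although the study itself relies on SPSS, the logic of compiling a pivot table from raw records can be sketched in Python with pandas. The column names and values below are purely illustrative assumptions, not data from the actual study:

```python
import pandas as pd

# Hypothetical raw survey records (illustrative data only)
records = pd.DataFrame({
    "region": ["North", "North", "South", "South", "North", "South"],
    "gender": ["F", "M", "F", "M", "M", "F"],
    "score":  [4, 3, 5, 2, 4, 5],
})

# Compile a pivot table: mean score broken down by region and gender
pivot = pd.pivot_table(records, values="score",
                       index="region", columns="gender",
                       aggfunc="mean")
print(pivot)
```

The same cross-tabulation of aggregated values is what an original pivot table provides at the primary-processing stage, whether it is built by hand, in a spreadsheet, or in a statistical package.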
The second stage is mathematical data processing, which requires careful preparation. To determine the appropriate methods of mathematical and statistical processing, it is important first of all to assess the nature of the distribution of every parameter used. For parameters whose distribution is normal or close to normal, parametric methods can be applied, which in many cases are more powerful than nonparametric ones (Ali & Bhaskar, 2016). The advantage of the latter is that they allow statistical hypotheses to be tested regardless of the shape of the distribution.
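As a minimal sketch of this decision step, the Shapiro-Wilk test (available in SciPy as well as in SPSS) can be used to check whether a parameter's distribution is plausibly normal before choosing between parametric and nonparametric methods. The simulated samples here are illustrative assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Two simulated parameters: one drawn from a normal distribution,
# one from a strongly skewed (exponential) distribution
normal_like = rng.normal(loc=50, scale=10, size=100)
skewed = rng.exponential(scale=10, size=100)

# Shapiro-Wilk test: a small p-value is evidence against normality
p_normal = stats.shapiro(normal_like).pvalue
p_skewed = stats.shapiro(skewed).pvalue

# Choose parametric statistics only when normality is plausible
method_for_skewed = "parametric" if p_skewed > 0.05 else "nonparametric"
```

For the clearly skewed parameter, the test rejects normality, which would direct the researcher toward a nonparametric method for that variable.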
One of the most common tasks in data processing is assessing the reliability of differences between two or more series of values. Mathematical statistics offers a number of methods for solving it. Computer-based data processing has become the most widespread approach today. Many statistical applications include procedures for evaluating differences between parameters of the same sample or of different samples (Tyagi, 2020). With fully computerized processing of the material, it is not difficult to apply the appropriate procedure at the right time and assess the differences of interest.
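One such procedure, for two independent series of normally distributed values, is the independent-samples t-test. A brief sketch with invented measurement values (illustrative assumptions, not study data):

```python
from scipy import stats

# Two hypothetical series of measurements (illustrative values)
group_a = [12.1, 11.8, 12.5, 12.0, 12.3, 11.9, 12.4]
group_b = [13.0, 13.4, 12.9, 13.2, 13.5, 13.1, 12.8]

# Independent-samples t-test: is the difference between the
# two means reliable, or attributable to chance?
t_stat, p_value = stats.ttest_ind(group_a, group_b)

# Conventional 5% significance threshold
significant = p_value < 0.05
```

SPSS offers the same procedure through its compare-means facilities; the computation and interpretation of the p-value are identical.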
The next stage may be called the formulation of conclusions. Conclusions are statements that express the meaningful results of the study in a concise form; they reflect, in thesis form, the new findings obtained by the author. A common mistake is for the author to include in the conclusions provisions that are already generally accepted in science and no longer need proof. The conclusions should respond in a definite way to each of the objectives listed in the introduction.
The format for presenting the results after completing the analysis is of no small importance (Tyagi, 2020). The main content needs to be translated into an easy-to-read format that matches the audience's requirements. At the same time, easy access to additional background data should be provided for those who want to understand the topic more thoroughly. These basic rules apply regardless of the presentation format.
Solving this problem successfully requires special methods of analysis and information processing. Classical information technologies make it possible to store and structure information efficiently and to retrieve it quickly in a user-friendly form. The main strength of SPSS Statistics is that it provides a vast range of instruments for statistical work (Allen et al., 2014). For all the complexity of modern methods of statistical analysis, which draw on the latest achievements of mathematical science, SPSS allows one to focus on the peculiarities of applying them in each specific case. The program's capabilities significantly exceed the range of functions provided by standard business programs such as Excel.
The SPSS program provides the user with ample opportunities for the statistical processing of experimental data, for the formation of databases (SPSS data files), and for their modification. SPSS may be considered a complex and flexible statistical analysis tool (Allen et al., 2014). It can take data from virtually any file type and use it to create tabular reports, graphs and distribution maps, descriptive statistics, and sophisticated statistical analyses.
At this point, it seems reasonable to define the sequence of the analysis using the SPSS tools. First, it is essential to draw up a questionnaire containing the questions the researcher needs. Next, a survey is carried out. To process the received data, a coding table must be drawn up. The coding table establishes the correspondence between individual questions of the questionnaire and the variables used in computer data processing (Allen et al., 2014). This solves two tasks: first, a correspondence is established between the individual questions of the questionnaire and the variables; second, a correspondence is established between the possible values of the variables and code numbers.
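A coding table of this kind can be sketched as a simple mapping. The question texts, variable names, and numeric codes below are hypothetical placeholders, since the actual questionnaire is not given here:

```python
# A minimal coding table: each questionnaire question maps to a
# variable name, and each possible answer maps to a numeric code.
# (All question texts and codes are illustrative assumptions.)
coding_table = {
    "How satisfied are you with the service?": {
        "variable": "satisfaction",
        "codes": {"Very satisfied": 1, "Satisfied": 2,
                  "Dissatisfied": 3, "Very dissatisfied": 4},
    },
    "What is your gender?": {
        "variable": "gender",
        "codes": {"Female": 1, "Male": 2},
    },
}

def encode(question: str, answer: str) -> tuple[str, int]:
    """Translate a raw questionnaire answer into (variable, code)."""
    entry = coding_table[question]
    return entry["variable"], entry["codes"][answer]

var, code = encode("What is your gender?", "Female")
```

In SPSS the same correspondences are captured in the Variable View, where variable names and value labels are defined before data entry.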
Next, one needs to enter the data into the data editor according to the defined variables. After that, depending on the task, the desired function and chart type are selected. Then, the resulting tabular output should be analyzed. All the statistical functions used directly in exploring and analyzing data are located in the Analyze menu. A very important analysis of multiple responses can be performed using the so-called dichotomous method. This approach is used when the questionnaire invites respondents to mark several answer options for a single question (Allen et al., 2014).
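The dichotomous method can be illustrated outside SPSS as well: each answer option becomes its own 0/1 variable, and option frequencies are then tallied across respondents. The option names and responses below are invented for the sake of the example:

```python
import pandas as pd

# Each respondent may mark several options for one question.
# (Option names and answers are illustrative assumptions.)
raw_answers = [
    ["newspaper", "tv"],
    ["tv"],
    ["internet", "tv"],
    ["newspaper", "internet"],
]

options = ["newspaper", "tv", "internet"]

# Dichotomous coding: one 0/1 variable per answer option
dichotomous = pd.DataFrame(
    [{opt: int(opt in answers) for opt in options}
     for answers in raw_answers]
)

# Frequency of each option across all respondents
counts = dichotomous.sum()
```

This is the same representation SPSS builds internally when a multiple-response set is defined over a group of dichotomous variables.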
Comparing the means of different samples is one of the most commonly used methods of statistical analysis. In this case, it must always be clarified whether the observed difference in mean values can be explained by statistical fluctuations. This method seems appropriate because the study will involve participants from all over the state, and their responses will need to be compared.
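When more than two groups are compared, this question is commonly addressed with a one-way ANOVA, which tests whether the spread among group means exceeds what random fluctuation would produce. The regional groups and values below are illustrative assumptions only:

```python
from scipy import stats

# Hypothetical mean-rated responses from three regions of the state
north = [3.1, 3.4, 2.9, 3.3, 3.0]
south = [3.2, 3.0, 3.3, 3.1, 2.8]
west = [4.1, 4.3, 4.0, 4.4, 4.2]

# One-way ANOVA: can the differences among the group means be
# explained by statistical fluctuation alone?
f_stat, p_value = stats.f_oneway(north, south, west)

means_differ = p_value < 0.05
```

A small p-value, as here, indicates that at least one group mean genuinely differs from the others; the equivalent procedure in SPSS is reached through its compare-means facilities.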
It should be stressed that SPSS is the most widely used statistical software. The main advantage of the SPSS package, as one of the most advanced achievements in the area of automated data analysis, is its broad coverage of modern statistical approaches, successfully combined with a large number of convenient tools for visualizing the results (Allen et al., 2014). The latest version offers notable possibilities not only in psychology, sociology, and biology but also in medicine, which is crucial for the aims of future research. This greatly expands the applicability of the package, which will serve as a significant basis for ensuring the validity of the study.
References
Ali, Z., & Bhaskar, S. B. (2016). Basic statistical tools in research and data analysis. Indian Journal of Anaesthesia, 60(9), 662–669.
Allen, P., Bennett, K., & Heritage, B. (2014). SPSS Statistics version 22: A practical guide. Cengage.
Simplilearn. (2021). What is data analysis: Methods, process and types explained. Web.
Tyagi, N. (2020). Introduction to statistical data analysis. Analytic Steps. Web.