According to Bolton, “a computer program is a set of instructions for a computer to perform a specific task” (2012, p. 1). Data processing is the process by which information is obtained from data (Articlesbase, 2012, p. 1). There are four main methods used by a computer program (or application) to process data: batch, online, real-time, and distributed processing.
In the batch method, a program begins to process data only once it has been fully collected and organized into a batch (Jones, 2009, p. 8). An example where this method of data processing is particularly effective is in computerized payroll cheque processing programs. In such programs, it is imperative that all the data for a pay period is collected before processing begins, as this ensures correct debiting and crediting.
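The batch approach can be sketched in a few lines of Python; the record format, field names, and pay rates below are hypothetical examples, not part of any cited system:

```python
# Minimal sketch of batch processing: timesheet records are collected
# for the whole pay period first, then processed together as one batch.
# The record layout and rates are hypothetical.

def process_payroll_batch(timesheets):
    """Compute gross pay per employee from a complete batch of records."""
    totals = {}
    for record in timesheets:
        employee = record["employee"]
        pay = record["hours"] * record["hourly_rate"]
        totals[employee] = totals.get(employee, 0.0) + pay
    return totals

# Processing starts only after the full pay period has been collected.
batch = [
    {"employee": "A", "hours": 8, "hourly_rate": 20.0},
    {"employee": "B", "hours": 6, "hourly_rate": 25.0},
    {"employee": "A", "hours": 7, "hourly_rate": 20.0},
]
print(process_payroll_batch(batch))  # {'A': 300.0, 'B': 150.0}
```

Because the whole batch is visible at once, each employee's entries can be summed into a single accurate figure.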
In the online processing method, data is processed as it is input into the program; unlike batch processing, it does not wait for the data to be organized into a batch (Jones, 2009, p. 14). The computer program therefore responds immediately to the data being input. Examples where online data processing is applied include hotel booking systems and word-processing programs.
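The same idea can be sketched with a toy hotel booking class, where each request is answered the moment it arrives; the class, its room counts, and its responses are hypothetical illustrations:

```python
# Minimal sketch of online processing: each input is handled immediately,
# with no batching. The booking data here is hypothetical.

class BookingSystem:
    def __init__(self, rooms_available):
        self.rooms_available = rooms_available

    def handle_request(self, rooms_wanted):
        """Respond immediately to each booking request as it is input."""
        if rooms_wanted <= self.rooms_available:
            self.rooms_available -= rooms_wanted
            return "confirmed"
        return "rejected"

system = BookingSystem(rooms_available=3)
print(system.handle_request(2))  # confirmed
print(system.handle_request(2))  # rejected
print(system.rooms_available)    # 1
```

Note that each request is answered using only the state at the moment it arrives, which is exactly what a booking desk needs.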
In real-time processing, as with online processing, data is processed as it is input into the program (Jones, 2009, p. 20). The difference, however, is that the processing must complete within a strict deadline, because the output it produces affects the next data input to the program (Jones, 2009, p. 3). An example where real-time processing is effective is in patient monitoring programs.
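The feedback character of real-time processing, where each output influences the next input, can be sketched as follows; the thresholds, dose values, and readings are hypothetical, and the sketch models only the feedback loop, not the timing guarantees a true real-time system requires:

```python
# Minimal sketch of the real-time feedback loop: each reading must be
# handled before the next arrives, and the output (the dose) influences
# the next reading. Thresholds and readings are hypothetical.

def adjust_dose(heart_rate, current_dose):
    """Return an updated dose based on the latest heart-rate reading."""
    if heart_rate > 100:                 # too fast: raise the dose
        return current_dose + 1
    if heart_rate < 60:                  # too slow: lower the dose
        return max(current_dose - 1, 0)
    return current_dose                  # within range: no change

dose = 2
for reading in [110, 105, 80, 55]:
    # In a real system this step would run under a hard deadline.
    dose = adjust_dose(reading, dose)
print(dose)  # 3
```

If any step missed its deadline, the dose would be computed from a stale reading, which is precisely the failure real-time constraints exist to prevent.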
In distributed processing, data processing is done on more than one computer, typically on a server and remote workstations (Dephoff, 2012, p. 4). An example where distributed processing is effective is in ATM applications.
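The division of labour can be illustrated with Python's standard `multiprocessing` module; in a real ATM network the workers would be remote machines rather than local processes, and the transaction check and limit below are hypothetical:

```python
# Minimal sketch of distributed processing: the workload is split across
# several workers instead of running on one processor. Local processes
# stand in for what would be remote machines in a real ATM network.
from multiprocessing import Pool

def validate_transaction(amount):
    """Toy per-transaction check run on a worker (rule is hypothetical)."""
    return amount <= 500  # reject withdrawals over a hypothetical limit

if __name__ == "__main__":
    transactions = [100, 700, 250, 500]
    with Pool(processes=2) as pool:
        # Each worker process validates a share of the transactions.
        results = pool.map(validate_transaction, transactions)
    print(results)  # [True, False, True, True]
```

The point of the sketch is only the structure: the same check runs on every worker, and the coordinator gathers the results.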
Having discussed the methods above, this paper argues that the methods used to process data in a program do not change as the quantity of data increases.
The reason for adopting this argument is that if the method of data processing changed, programs would be at high risk of becoming unreliable and inefficient in carrying out their tasks. To illustrate the logic behind this argument, we consider two cases that investigate the effect of changing the data processing method.
The first case is that of a computerized payroll program. To get an accurate payroll, data has to be collected over a specified period and processed as a whole. This is why the batch processing method is preferred in computerized payroll programs. Let us assume that online processing is used instead of batch processing.
The payrolls produced in this case would be inaccurate, since online processing does not support collecting data over a period before processing; computerized payroll systems would thus become unreliable.
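The mismatch can be made concrete with a short sketch (the records and rate are hypothetical): processing each timesheet entry in isolation, as online processing would, yields fragmented figures rather than the single period total a paycheque needs.

```python
# Sketch of why online (per-record) processing misfits payroll.
# Records and the rate are hypothetical.
records = [
    {"employee": "A", "hours": 8},
    {"employee": "A", "hours": 7},
]
RATE = 20.0

# Online style: each record is processed immediately and in isolation,
# so employee A appears to earn two separate partial amounts.
online_outputs = [r["hours"] * RATE for r in records]
print(online_outputs)  # [160.0, 140.0]

# Batch style: the whole period is summed first, giving one correct total.
batch_total = sum(r["hours"] for r in records) * RATE
print(batch_total)  # 300.0
```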
In the second case, we consider a patient monitoring system that administers a drug to a patient based on the patient’s heart rate. The data processing method required in this case is one that provides the program with real-time output so that it can determine whether changes in dosage are needed.
Real-time data processing is apt in this case. If, for instance, batch processing were used instead of real-time processing, the consequences could be fatal, since batch processing does not give real-time outputs. In such a case, the patient monitoring system becomes unreliable.
To avoid program unreliability and inefficiency, program designers do not design the data processing methods in a program to change as data quantity increases. This paper therefore concludes that the methods used to process data in a program do not change as the quantity of data increases.
References
Articlesbase. (2012). Definition of data processing. Web.
Bolton, D. (2012). Definition of program. About.com. Web.
Dephoff, J. (2012). Methods of data processing. Web.
Jones, R. (2009). Types of processing. ib-computing.com. Web.