Introduction
The main goal of this paper is to provide a thorough critique of the research methodologies of two articles. The first article is titled "Factors Affecting the Performance of Individual Chain Store Units (An Empirical Analysis)" and the second is titled "Private Physicians or Walk-in Clinics: Do the Patients Suffer?"
Hise et al. (1983) wrote the first article and Dant, Lumpkin & Bush (1990) wrote the second. Hise et al. (1983) reported on the methodology and results of a study that sought to explain the variation in total sales, contribution income, and return on assets across more than 132 chain store units.
In the second article, Dant, Lumpkin & Bush (1990) used a consumer-oriented framework to establish whether patients of private physicians and patients of walk-in clinics differ in their preferences for healthcare service providers.
Their findings showed that no significant differences existed in the way the two groups of patients perceived their healthcare providers.
The methodological choices of both research teams are evident in their data collection, data analysis, sample choices, and statistical tools.
This paper uses these elements to assess each team's rationale for selecting its data analysis methods, the appropriateness of its data set, the rigour of its data analysis, and the appropriateness of its interpretation of results.
Together, these analyses provide the framework for the set of recommendations this paper offers to improve the two articles analysed.
Article 1: Factors Affecting the Performance of Individual Chain Store Units (An Empirical Analysis)
Rationale for Selected Data Analysis Method
Hise et al. (1983) rely heavily on regression analysis as their main data analysis technique; specifically, the authors use forward and backward stepwise regression.
Regression analysis strengthens the credibility of the data analysis because it allows the researchers to specify the nature of the variables analysed and to identify explanatory factors (Beldona 2007).
These features are consistent with action theory.
The use of regression also improves the credibility of the data analysis process because, as Hair, Black & Babin (2009, p. 155) note, a statistically valid adjustment in the regression model provides an accurate quantitative estimate of the net effects of the factors under analysis.
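To make the stepwise logic concrete, the following minimal sketch illustrates a forward selection pass that adds predictors of a store performance measure one at a time. It is not drawn from Hise et al.'s data: the variable names are hypothetical, and scikit-learn's greedy sequential selector is used here as a modern stand-in for classic stepwise regression.

```python
# Illustrative sketch of forward stepwise-style selection on hypothetical store data.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.feature_selection import SequentialFeatureSelector

rng = np.random.default_rng(0)
n = 132  # roughly the number of store units reported in the study

# Hypothetical predictors (e.g., floor space, staff count, local income, ad spend).
X = pd.DataFrame(rng.normal(size=(n, 4)),
                 columns=["floor_space", "staff_count", "local_income", "ad_spend"])
# Hypothetical outcome: return on assets driven mainly by two predictors plus noise.
y = 0.6 * X["floor_space"] + 0.3 * X["ad_spend"] + rng.normal(scale=0.5, size=n)

selector = SequentialFeatureSelector(LinearRegression(),
                                     n_features_to_select=2,
                                     direction="forward")  # or "backward"
selector.fit(X, y)
print("Selected predictors:", list(X.columns[selector.get_support()]))
```

Classic stepwise procedures add or drop variables based on F-tests or p-values rather than cross-validated scores, but the greedy add-one or drop-one logic is the same.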
Rigour in Data Analysis
Although the regression model improves the data analysis process for Hise et al. (1983), its weaknesses also raise significant questions about the entire process. For example, Akinci (2007) notes that regression analysis is prone to data snooping.
A regression may, for instance, show a strong relationship between two variables while omitting other factors that affect that relationship.
Indeed, Hise et al. (1983) used 18 independent variables to predict chain store performance without considering other factors that may influence the same outcome.
Issues such as the demographic profiles of employees and the managerial commitment of the unit stores appear only as afterthoughts of the analysis, rather than as findings supporting the analytical process itself.
This weakness is inherent in the regression approach: even though it provides an accurate assessment of how the 18 independent variables predict chain store performance, it fails to explain how "other" variables (especially intangible, non-statistical ones) affect the same outcome.
Akinci (2007) also notes that regression often produces a circular analysis of independent variables, which sometimes makes it inapplicable. For example, a regression may relate two variables, say "X" and "Y", in such a way that "Y" explains "X" and "X" explains "Y", yielding a closed, circular analysis of the research variables.
This exposes a significant weakness of the data analysis method, because factors that do not surface within this closed loop may equally fail to surface in the final research findings.
Finally, the use of regression analysis dents the credibility of the findings proposed by Hise et al. (1983) because the observations must exhibit sufficiently contrasting variation before any adjustment in the findings can emerge.
Although Hise et al. (1983) may have adopted a relatively weak research methodology, their use of the mean, standard deviation, minimum, and maximum of the variables, alongside their correlations, provided useful insight into the nature of the research variables.
In fact, these measures provided an accurate assessment of the quality of every variable used in the research process.
Similarly, identifying return on assets as the main measure of store performance provides an accurate account of the main causes of store success (as proposed by the store managers) (Hise et al. 1983).
Indeed, the store managers are the most reliable sources for identifying the main factors that drive the performance of individual chain stores, and this measure can be complemented by specific factors that drive store performance (such as sales volume contribution).
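As an illustration of this kind of variable screening, the short sketch below inspects the mean, standard deviation, minimum, and maximum of each variable in a single pass. The store-level figures are invented for the example and are not the authors' own data.

```python
# Descriptive screening of hypothetical store-level variables.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
stores = pd.DataFrame({
    "total_sales": rng.normal(1_000_000, 150_000, size=132),
    "contribution_income": rng.normal(120_000, 30_000, size=132),
    "return_on_assets": rng.normal(0.08, 0.02, size=132),
})

# Mean, standard deviation, minimum, and maximum for each variable in one table.
print(stores.describe().loc[["mean", "std", "min", "max"]])
```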
Appropriateness of Result Interpretation
Based on the evidence provided above, Hise et al. (1983) admit that the use of the regression model poses the problem of multicollinearity, which occurs when the research variables are inter-correlated.
The problem worsens when the degree of inter-correlation is high; in such situations it is difficult to obtain accurate estimates of individual statistics in the research process. Hise et al. (1983) note that this problem commonly occurs among independent variables that seem to lack any substantial explanatory power.
In a different context, Zelbst (2009) notes that inter-correlation between the factors under analysis poses a significant challenge to the data analysis process because it increases the likelihood of misinterpreting results.
For example, Lee (2003) says, "it is easy to interchange the regression co-efficient of two perfectly correlated variables" (p. 72), a possibility that can easily lead to poor interpretation of results.
Similarly, inter-correlated variables may produce an unstable prediction model: as the collinearity between two variables increases, the standard errors of their estimates also increase.
Lastly, inter-correlated variables may lead to the rejection of good predictors (Lee 2003).
In other words, where there is a high level of inter-correlation between two variables, the likelihood of finding good predictors declines. These are the weaknesses of the data analysis process reported by Hise et al. (1983).
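A short, hypothetical sketch shows how such intercorrelation can be flagged in practice using variance inflation factors; the variable names are assumptions for illustration, not taken from the article. A VIF well above 10 for a predictor usually signals problematic multicollinearity.

```python
# Flagging multicollinearity among hypothetical predictors with variance inflation factors.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(2)
n = 132
floor_space = rng.normal(size=n)
staff_count = 0.9 * floor_space + rng.normal(scale=0.2, size=n)  # deliberately correlated
ad_spend = rng.normal(size=n)

X = sm.add_constant(pd.DataFrame({"floor_space": floor_space,
                                  "staff_count": staff_count,
                                  "ad_spend": ad_spend}))

for i, name in enumerate(X.columns):
    if name != "const":
        print(f"{name}: VIF = {variance_inflation_factor(X.values, i):.1f}")
```

In this toy example, floor_space and staff_count would show inflated VIFs while ad_spend would not, mirroring the unstable coefficient estimates and inflated standard errors described above.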
Appropriateness of the Data Set
The main data collection tool used by Hise et al. (1983) is the questionnaire. Barnes (2001) notes that questionnaires bring both significant strengths and weaknesses to the data collection process.
Notable strengths include standardisation, ease of administration, and the ability to collect information from a large group of people. These advantages suggest that the data collection process was relatively objective, quick, and wide-ranging (Barnes 2001).
Nonetheless, despite these advantages, questionnaires may weaken the data analysis process because there is a high probability that respondents gave superficial information or forgot important details about the questions posed.
Indeed, since questionnaires gather information about past events, there is a high likelihood that the information collected was irrelevant or statistically unreliable, especially if the chain managers had forgotten important details.
Finally, because researchers structure questionnaires in a standardised manner, the likelihood of misinterpretation is high, especially when respondents are unavailable to expand on their answers. Broadly, these challenges compromise the integrity of the data collection method (Barnes 2001).
A separate issue that may have compromised the data collection process is the reliance on secondary sources of data. Hise et al. (1983) admit that some of the secondary data were missing or unusable.
This weakness also affected the sample size, because Hise et al. (1983) excluded 37 units from the research process for having insufficient data.
The reduction in the sample size compromised the credibility of the research process because it may have weakened the validity of the findings across the entire chain.
Indeed, Al-Omiri (2007) notes that a larger sample is more representative of the real situation, while a smaller sample is less representative of the entire research scope.
Finally, generalising the data for most of the stores analysed provided an efficient assessment of the unit stores: there was no need to conduct an independent assessment of every store, since most stores stocked similar products and shared the same profiles.
This approach saved the time and costs that individual assessments of every store would otherwise have required. Nonetheless, it obscures slight variations that may influence the performance of some of these stores.
For example, while the managerial and operational profiles of these stores may be similar, the market environments that shape the success of each store may differ. Generalising store operations therefore overlooks this issue.
Article 2: Private Physicians or Walk-in Clinics: Do the Patients Suffer?
Appropriateness of the Data Set
Dant, Lumpkin & Bush (1990) relied mainly on telephone interviews as their data collection tool. They report that 2,777 telephone calls were made, of which only 670 yielded completed interviews and 454 were ineligible (Dant, Lumpkin & Bush 1990).
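On one plausible reading of these figures (assuming, as the article does not state explicitly, that ineligible calls are removed from the base before computing the rate), the completion rate works out to roughly 29 per cent:

\[
\frac{670}{2777 - 454} \;=\; \frac{670}{2323} \;\approx\; 0.29
\]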
Although telephone interviews are an accepted and well-structured research approach, the over-reliance on them dented the credibility of the research process. In particular, it may have introduced significant bias against households without a working telephone, or without a telephone at all.
Indeed, Dant, Lumpkin & Bush (1990) admit this weakness, noting that even though the sample was random, the use of telephone directories clearly introduces bias by excluding households with unlisted numbers or without residential telephones.
Nonetheless, Dant, Lumpkin & Bush (1990) counter this concern by arguing that the number of unlisted and non-telephone homes was relatively low, so a high degree of representational bias was unlikely.
They further argue that any representational exclusion would still be statistically insignificant (Dant, Lumpkin & Bush 1990). However, it is crucial to question the quality of the information sourced from telephone interviews, even if there is little representational bias.
One such concern is the brevity of telephone interviews. Cachia (2011) notes that it is difficult to hold detailed discussions of research topics over the telephone and proposes that face-to-face interviews are more reliable and effective in such circumstances.
This weakness is compounded by the high level of distraction that characterises phone interviews. Calvert (2005) notes that phone interviews are highly prone to environmental distractions (affecting both the respondent and the interviewer), which compromises the quality of communication.
Given these dynamics, it is fair to say that the data gathered by Dant, Lumpkin & Bush (1990) may have lacked an in-depth understanding of the research problem.
These limitations help explain why many researchers regard telephone interviews as inferior to face-to-face interviews.
The use of random sampling also brings significant strengths and weaknesses to the data collection process. Matheson (1996) describes random sampling as the least biased sampling method and one that can handle large sample populations.
In the same breath, however, Matheson (1996) notes that random sampling may produce unrepresentative samples, especially where the population is unevenly distributed, and that there may be practical constraints on collecting research information with this technique.
The use of telephone interviews, however, remedies this problem, because it would have been difficult to collect information physically from such a large, geographically dispersed pool of respondents.
The random sampling technique therefore improved the convenience of the research process more than it dented its credibility. In this regard, the sampling methodology proposed by Dant, Lumpkin & Bush (1990) was commendable.
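A minimal, hypothetical sketch of the trade-off discussed above (the directory, household names, and listing rate are all invented for illustration) shows that drawing a simple random sample from a telephone directory is cheap and unbiased with respect to the frame itself, yet it silently excludes unlisted households:

```python
# Simple random sampling from a hypothetical telephone directory frame.
# The frame excludes unlisted households, so the sample inherits that coverage bias.
import random

random.seed(3)
population = [f"household_{i}" for i in range(10_000)]
listed = [h for h in population if random.random() < 0.85]  # assume 85% are listed

sample = random.sample(listed, k=670)  # same completed-interview count as the study
coverage_gap = 1 - len(listed) / len(population)
print(f"Sampled {len(sample)} households; {coverage_gap:.0%} of households never reachable.")
```

The random draw itself is unbiased, but no amount of randomisation within the directory can reach the households the frame omits, which is exactly the coverage limitation the authors acknowledge.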
Appropriateness of Result Interpretation
Defining walk-in clinics and other screening criteria before data collection was a sound move. In other words, the paper set out to determine whether patients of private physicians and patients of walk-in clinics differed, so it was vital to establish which clinics counted as private practices and which as walk-in clinics.
This was an informed decision by the researchers because it provided a proper basis for choosing respondents.
Stated differently, it made it easier to exclude respondents who frequented other forms of healthcare facility besides the private practices and walk-in clinics described above.
In fact, Dant, Lumpkin & Bush (1990) say, “the requirement for respondents to answer specific questions about the clinics they had visited gave the researchers enough ground to anchor specific answers about their most recent response” (p. 24).
This way, it was easier for the researchers to safeguard the data collection process.
Rationale for Selected Data Analysis Method
The data analysis methods selected by Dant, Lumpkin & Bush (1990) are the multivariate analysis of variance (MANOVA) and multiple discriminant analysis.
Dant, Lumpkin & Bush (1990) proposed that the multivariate analysis of variance method is (best) applicable when “there are multiple interval scaled criterion variables and one categorical predictor variable” (p. 27).
Dant, Lumpkin & Bush (1990) also note that the MANOVA technique resembles other techniques, such as the univariate analysis of variance (ANOVA), but that its suitability for studying population differences surpasses that of most alternatives.
In this regard, the MANOVA technique was highly appropriate for Dant, Lumpkin & Bush (1990) in identifying the patronage characteristics of patients of private clinics and patients of walk-in clinics.
MANOVA is therefore appropriate for identifying differences in population characteristics, but researchers should still use other methods to evaluate why those differences exist.
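A minimal sketch of how such a test can be run compares several interval-scaled patronage ratings across the two patient groups at once. The outcome names and data are hypothetical, and statsmodels is used here purely for illustration; it is not the software reported in the article.

```python
# MANOVA sketch: several interval-scaled ratings, one categorical predictor (clinic type).
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(4)
n = 670  # completed interviews reported in the study
group = rng.choice(["private_physician", "walk_in_clinic"], size=n)

df = pd.DataFrame({
    "clinic_type": group,
    "convenience": rng.normal(5.0, 1.0, size=n),
    "perceived_quality": rng.normal(5.2, 1.1, size=n),
    "cost_rating": rng.normal(4.8, 1.2, size=n),
})

fit = MANOVA.from_formula(
    "convenience + perceived_quality + cost_rating ~ clinic_type", data=df)
print(fit.mv_test())  # Wilks' lambda, Pillai's trace, etc. for the group effect
```

A non-significant group effect in such a test would correspond to the authors' finding that the two patient groups do not differ meaningfully, although it would not, by itself, explain why.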
Rigour in Data Analysis
The data analysis process proposed by Dant, Lumpkin & Bush (1990) was rigorous enough to provide an accurate assessment of the main differences in the demographic and patronage characteristics of the populations sampled.
For example, when evaluating the sample demographics, Dant, Lumpkin & Bush (1990) recorded the respondents' marital status, whether they had children, their employment status, and their educational level, among other characteristics.
This profile provided an accurate picture of the demographics of the sample population, making it easier to explain the differing characteristics of the groups sampled.
Recommendations
Even as this paper reviews the use of random sampling in both articles described above, it is important to acknowledge that no study has enough time, energy, money, and other resources to capture, correctly, the view of every potential respondent.
However, as Johnson (2008) affirms, researchers should still ensure that whatever methodological approach they choose represents the "whole" as accurately as possible.
Similarly, they should recognise that the right sample size strikes a balance between statistical significance and the validity of the research results. Indeed, if researchers pursue a sampling strategy with minimal bias, statistically valid assessments are more likely to emerge.
However, most researchers assume that parent populations are normally distributed, thereby affirming the belief that a 95% (or higher) confidence level is achievable (Matheson 1996).
This may be true, but it is still vital to examine the research sample closely enough to judge whether the normal distribution actually holds. In other words, it is crucial to acknowledge that up to 5% of samples may fall outside this assumption (Matheson 1996).
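As an illustrative calculation (a standard textbook formula, not taken from either article), the sample size needed to estimate a proportion at a 95% confidence level with a 5% margin of error, under the most conservative assumption p = 0.5, is:

\[
n \;=\; \frac{z^{2}\,p\,(1-p)}{e^{2}} \;=\; \frac{1.96^{2} \times 0.5 \times 0.5}{0.05^{2}} \;\approx\; 385
\]

The figure assumes a simple random sample and an approximately normal sampling distribution; if those assumptions fail, the nominal 95% coverage is not guaranteed, which is precisely the caution raised by Matheson (1996).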
Conclusion
The methodological properties of the two articles analysed above provide some insightful features of the credibility of their findings. Notably, both articles described above show a great determination (on the part of the researchers) to choose the right methodologies that would suit the nature and scope of their studies.
Unfortunately, some of the methodologies chosen failed to protect the integrity of the findings. Notable weaknesses in both articles stemmed from the data collection process.
Nonetheless, it is crucial for both groups of researchers to balance the strengths and weaknesses of every methodology, instead of overly depending on a single technique for collecting data. Striking a strategic balance among the methodologies chosen may help to provide a fair and balanced representation of the findings.
References
Akinci, S 2007, ‘Where does the logistic regression analysis stand in marketing literature?: A comparison of the market positioning of prominent marketing journals’, European Journal of Marketing, vol. 41 no. 5, pp. 537 – 567.
Al-Omiri, M 2007, ‘A preliminary study of electronic surveys as a means to enhance management accounting research’, Management Research News, vol. 30 no. 7, pp. 510 – 524.
Barnes, D 2001, ‘Research methods for the empirical investigation of the process of formation of operations strategy’, International Journal of Operations & Production Management, vol. 21 no. 8, pp. 1076 – 1095.
Beldona, V 2007, ‘Regression analysis for equipment auditing’, Managerial Auditing Journal, vol. 22 no. 8, pp. 809 – 822.
Cachia, M 2011, ‘The telephone medium and semi-structured interviews: a complementary fit’, Qualitative Research in Organizations and Management: An International Journal, vol. 6 no. 3, pp. 265 – 277.
Calvert, P 2005, ‘Telephone survey research for library managers’, Library Management, vol. 26 no. 3, pp. 139 – 151.
Dant, R, Lumpkin, J & Bush, R 1990, 'Private physicians or walk-in clinics: do the patients suffer?', Journal of Health Care Marketing, vol. 10 no. 2, pp. 23 – 34.
Hair, J, Black, W & Babin, B 2009, Multivariate Data Analysis: A Global Perspective, Pearson Education, London.
Hise, R, Kelly, P, Gable, M & McDonald, J 1983, 'Factors affecting the performance of individual chain store units: an empirical analysis', Journal of Retailing, vol. 59 no. 2, pp. 1 – 18.
Johnson, C 2008, ‘Decision ’08: event marketing or product sampling?’, Journal of Consumer Marketing, vol. 25 no. 5, pp. 269 – 271.
Lee, J 2003, ‘A Canonical Correlation Analysis of CEO Compensation and Corporate Performance in the Service Industry’, Review of Accounting and Finance, vol. 2 no. 3, pp. 72 – 90.
Matheson, L 1996, ‘On sequential versus random sampling in statistical process control’, Benchmarking for Quality Management & Technology, vol. 3 no. 1, pp. 19 – 27.
Zelbst, P 2009, ‘Impact of supply chain linkages on supply chain performance’, Industrial Management & Data Systems, vol. 109 no. 5, pp. 665 – 682.