“Improving Requirements Elicitation” by Pitts & Browne: An Assessment

Objective

To critically evaluate the following research paper in terms of its research problem, theoretical propositions, experimental framework, and recommendations for improvement:

  • Pitts, M.G. & Browne, G.J., 2007, “Improving Requirements Elicitation: An Empirical Investigation of Procedural Prompts”, Information Systems Journal, 17, 89-110.

Summary of paper

In this IS paper, Pitts & Browne (2007) present their research findings on “procedural prompts” and their superiority in supporting Information Requirements Determination (IRD) for systems analysis tasks. Emphasising the importance of the IRD process to the overall success of IS projects, the authors challenge the conventional “interrogatory prompt” techniques that are more commonly used to tackle these problems.

The authors believe procedural prompts offer a much wider scope for overcoming the cognitive hurdles faced by analysts and users. To demonstrate this superiority, they conduct a controlled experiment with practising analysts to validate the theories involved. Combining literature review, a methodological framework, and experimentation, the authors attempt to develop new context-independent procedural prompts and test their impact on IRD efforts (Pitts & Browne, 2007).

Research Problem

The authors state outright the importance of requirements elicitation to the overall success of IS projects. Three stages of IRD are identified: information gathering, representation, and verification (Larsen & Naumann, 1992; Vitalari, 1992). In addressing the shortcomings arising from inadequate IRD efforts, the following research problem is identified:

Requirements elicitation (IRD) is subject to three fundamental cognitive challenges: (1) the limitations of humans as information processors, (2) the complex nature of requirements, and (3) the obstacles encountered in user and system interaction (Pitts & Browne, 2007). The complete list of cognitive challenges identified is as follows: working memory (capacity and bounded rationality), long-term memory (difficulty of recall, reconstructive nature), availability (retention and ease of recall), anchoring and adjustment (insufficient adjustment and overconfidence), representativeness (insensitivity to sample size), and confirmatory biases (Pitts & Browne, 2007).

It is suggested that information prompts can be used to overcome these problems, and that procedural prompts, which are a general extension of information prompts, offer a more substantive solution (Pitts & Browne, 2007). The essential characteristics of procedural prompts are summarisation and feedback, repetition and rephrasing, scenario building and elaboration, and, finally, counterargument (Pitts & Browne, 2007).

In terms of research requirements, the reasoning used to state the above research problem is deep, substantive, and generic, and it shows promise for solving the prototypical problems that the authors establish in the course of clarifying the tools and techniques to be used in the experiment. There is no doubt that the problem has been articulated lucidly; however, in my capacity as a reviewer, I advocate the following “reframing” suggestions in light of the research aims:

  1. The authors suggest procedural prompts offer a more substantive solution to IRD problems, especially when compared to information prompts. This is a major claim and should be backed by more experimental or literature evidence, which the authors do not provide. Since the proposition rests on the strength of their own experiments (“Editor’s Comments”, 2003), it would be better to restate the problem as “procedural prompts may offer a more substantive solution in comparison to interrogatory prompts”. This would overcome any inadequacy in the proposition.
  2. A clear articulation of the research problem is missing from the paper; the research problem shown above is my summary derived from the paper. Many scholars suggest that problem articulation is the most important constituent of a research paper (“Editor’s Comments”, 2003). Therefore, the authors need to restate the research problem in their paper along the lines summarised above.

Theoretical framework

Having stated the research problem in detail, let us now examine the theoretical constructs of the research paper. It is widely considered that any research paper dealing with IS theory should address the following research questions:

  1. Structural and Ontological Questions: A. What is theory? B. How is the term understood in the discipline? C. Of what is theory composed? D. What forms do contributions to knowledge take? E. How is theory expressed? F. What types of claims or statements can be made? G. What types of questions are addressed? (Gregor, 2006).
  2. Domain Questions: A. What phenomena are of interest in the discipline? B. What are the core problems or topics of interest? C. What are the boundaries of the discipline? (Gregor, 2006)

To answer the research questions posed above, the authors’ research has been examined against these theoretical constructs. Every research question is significant for interpreting matters relevant to the present discussion. Here is a semantic analysis of the paper’s theoretical constituents.

1A

Pitts and Browne (2007) advocate the following theory: “procedural prompts are superior to interrogatory prompts when it comes to IRD efforts in IS projects”.

1B + 1C

In the present discipline, the term interrogatory prompts is understood as a diverse array of “questioning” techniques, whereas procedural prompts lean more towards “summarisation and feedback”. Examples of interrogatory prompts are “Who is involved with the system?”, “What kinds of things do they do?”, and “How do they do it?”. Procedural prompts, in comparison, consist of “rephrased questions” such as “Can you summarise the features?” and “Tell me again what the important constituents of the system are” (Pitts & Browne, 2007).

1D + 1E

Pitts and Browne (2007) identify the following explanation of procedural prompts, which is closely tied to the research methods undertaken in the experiment: summarisation and feedback, which essentially consists of steps and procedures to overcome working memory limitations; repetition and rephrasing, which is meant to improve recall and reconstruction; scenario building and elaboration, which thrive on mental imagery and richer analysis of results, thereby giving a much better perspective; and counterargument, the use of which is identified as achieving real progress in reducing the inadequacy of IRD efforts (Pitts & Browne, 2007).

1F

The following claims can be made based on the above theory (it also contains my recommendations):

  1. The theory posits a strong cause-effect relationship between the various constituents of procedural prompts. This provides a platform on which these relationships may be investigated later (Gregor, 2006). More ground should be covered using causal analysis.
  2. The theory gives a set of testable propositions (hypotheses) which are then used to advance the claims made by the authors. This could be further improved by specifying a set of interrelated variables that allow for a better understanding of procedural prompts’ applications to problems already covered in the literature, e.g. their role in eliciting better responses from samples of analysts and users (Gregor, 2006).

1G + 2A + 2B

Since Pitts and Browne (2007) follow a hypothesis-driven approach to the issues at hand, the following research questions are addressed: practical benefits, such as providing structure and focus, eliciting more complete information, and increasing the discovery of unique alternatives and information (Pitts and Browne, 2007); and communication between analysts and users, such as better mutual understanding of the various situations involved in the problem.

2C

To clarify the boundaries of the research, the following set of research hypotheses has been identified to convey a meaningful statement of the relationship between the different theoretical constructs under investigation. The relationship between the constructs is a cause-effect one (Gregor, 2006); an illustrative sketch of how such hypotheses might be tested statistically follows the list.

  • Hyp 1: The number of requirements elicited will remain the same both before and after the procedural prompting elicitation (Pitts & Browne, 2007).
  • Hyp 2: The completeness of requirements elicited will remain the same both before and after the procedural prompting elicitation (Pitts & Browne, 2007).
  • Hyp 3: The number of requirements elicited using procedural prompts will remain the same as the number of requirements elicited using interrogatory prompts (Pitts & Browne, 2007).
  • Hyp 4: The completeness of requirements elicited using procedural prompts will remain the same as the completeness of requirements elicited using interrogatory prompts (Pitts & Browne, 2007).
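As an illustration only (not the authors’ actual analysis), the minimal sketch below shows how null hypotheses of this “remains the same” form could be tested: a paired t-test for the within-analyst comparisons (Hyp 1 and 2) and an independent-samples t-test for the between-technique comparisons (Hyp 3 and 4). All figures are invented placeholders rather than data from Pitts and Browne (2007).

```python
# A minimal sketch (not the authors' actual analysis): testing the null
# hypotheses above with t-tests from SciPy. All numbers are invented
# placeholders purely to show the shape of the analysis.
import numpy as np
from scipy import stats

# Hyp 1: number of requirements elicited before vs. after procedural
# prompting, measured on the same analysts (paired design).
before = np.array([12, 9, 15, 11, 10, 14])
after = np.array([16, 13, 18, 12, 15, 17])
t_paired, p_paired = stats.ttest_rel(after, before)
print(f"Hyp 1 (paired): t = {t_paired:.2f}, p = {p_paired:.3f}")

# Hyp 3: number of requirements elicited with procedural vs. interrogatory
# prompts, measured on independent groups of analysts.
procedural = np.array([16, 13, 18, 12, 15, 17])
interrogatory = np.array([11, 10, 13, 9, 12, 11])
t_ind, p_ind = stats.ttest_ind(procedural, interrogatory)
print(f"Hyp 3 (independent): t = {t_ind:.2f}, p = {p_ind:.3f}")

# A small p-value would lead to rejecting the "remains the same" null
# hypothesis in favour of a difference between conditions.
```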

Thus, judged against the above structural/ontological and domain questions, Pitts and Browne’s (2007) treatment of the research question can be considered satisfactory for our present objectives. The underlying motive for using the above techniques was to showcase the cause-effect relationship between the different variables connected with the theory. The experiment that follows operationalises the essential themes discussed in the theoretical section. To confirm and validate the findings, the experiment is analysed below using criteria from the literature.

Experiment

To test and validate the theoretical propositions connected with procedural prompts, Pitts and Browne (2007) conducted the following experiment, which is assessed against scientific criteria as shown below:

Experiment Design, Tasks, Control, Context, and Validity Issues

The experiment consisted of a case scenario for the development of an online grocery shopping IS. Participants were 54 practising analysts (sample size 54), 64.8% men and 35.2% women, with a median age of 42.3 years (Pitts & Browne, 2007). The respondents had a minimum of 11 years of experience in various capacities as systems analysts for grocery stores. A person “blind” to the aims of the research was chosen as the “user” (a grocery store manager), which supports the core validity constructs (Pitts & Browne, 2007).

The user had received a prior briefing on the questionnaire (a sample of 20 questions) that would later be used to assess his responses under both “procedural” and “interrogatory” techniques (Pitts & Browne, 2007). The sample of 54 analysts was trained in applying both techniques to elicit responses from the user. Quantity and completeness of requirements were the underlying measures guiding the elicitation of a uniform set of responses from the user (Pitts & Browne, 2007).

Research designs are considered inappropriate when they do not address an important problem in the field and lack various forms of experimental control (Jarvenpaa, 1985). As can be interpreted from the research design described above, the experiment was double-blind (the user and the analysts were unaware of each other’s intentions), had a single aim (to elicit a uniform set of responses), and was controlled (the user and analysts were trained in advance for specific responses).

Internal and External Validity

To understand the main validity issues for the conducted research, it is useful to consider the basic definitions of validity: internal validity indicates that there is a direct cause-effect relationship between the variables at hand (Jarvenpaa, 1985; Straub, 1989), while external validity indicates the extent to which the findings generalise beyond the experimental setting (Jarvenpaa, 1985).

In this paper, Pitts & Browne (2007) have performed a credible analysis of internal validity because they chose an “internal” system, consisting of an end-user and a selected sample of analysts, that allows a direct cause-effect relationship to be established between the study variables. As far as external validity is concerned, Pitts and Browne (2007) acknowledge the importance of validating the theoretical aims described earlier in this report.

Construct Validity

Construct validity refers to the situation in which measures show stability across different methodologies (Straub, 1989). In the present research, the following methodologies were identified: protocols for every requirements elicitation session, each consisting of a series of goals, processes, tasks, and information tools (Pitts & Browne, 2007). To assess construct validity, Pitts and Browne (2007) calculated means and standard deviations for the quantity, breadth, and depth of requirements elicited using both the interrogatory and procedural prompting methods.
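
To make this assessment concrete, here is a minimal sketch, with invented numbers rather than the values reported by Pitts and Browne (2007), of how means and standard deviations per prompt type could be computed for the quantity, breadth, and depth measures.

```python
# A minimal sketch, with invented data, of the descriptive statistics
# (means and standard deviations) computed per prompt type for the
# quantity, breadth, and depth of elicited requirements.
import numpy as np

# Hypothetical per-analyst measurements keyed by prompt type.
measures = {
    "interrogatory": {
        "quantity": [11, 10, 13, 9, 12, 11],
        "breadth":  [4, 3, 5, 3, 4, 4],
        "depth":    [2, 2, 3, 1, 2, 2],
    },
    "procedural": {
        "quantity": [16, 13, 18, 12, 15, 17],
        "breadth":  [6, 5, 7, 4, 6, 6],
        "depth":    [3, 3, 4, 2, 3, 4],
    },
}

for prompt_type, dims in measures.items():
    for dim, values in dims.items():
        arr = np.asarray(values, dtype=float)
        # ddof=1 gives the sample standard deviation.
        print(f"{prompt_type:13s} {dim:8s} "
              f"mean = {arr.mean():5.2f}  sd = {arr.std(ddof=1):5.2f}")
```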

Reliability analysis

Reliability refers to the situation in which measures are expected to show stability across different units of observation (Straub, 1989). In the present research, no such reliability analysis was undertaken because the sample of 54 analysts with one user was based on one-on-one interview procedures (Pitts & Browne, 2007). This is perhaps an important limitation of the present research and should be overcome in future work using appropriate reliability metrics.
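
As one example of the kind of reliability metric that could be applied in future work, the sketch below computes Cohen’s kappa for two hypothetical coders independently classifying the same set of elicited requirements; the coding scheme and ratings are invented for illustration and are not part of the original study.

```python
# A minimal sketch of one possible reliability check: Cohen's kappa for
# two independent coders classifying the same elicited requirements.
# The category labels and ratings below are invented placeholders.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in categories) / (n * n)
    return (observed - expected) / (1 - expected)

# Each elicited requirement is coded as functional (F), data (D) or process (P).
coder_1 = ["F", "F", "D", "P", "F", "D", "P", "P", "F", "D"]
coder_2 = ["F", "D", "D", "P", "F", "D", "P", "F", "F", "D"]
print(f"Cohen's kappa = {cohens_kappa(coder_1, coder_2):.2f}")
```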

To summarise the above findings, the experiment conducted by Pitts & Browne (2007) satisfies most of the experimental criteria outlined in the preceding theoretical discussion (barring the major disadvantage of the missing reliability analysis). The results and findings of the research confirm the basic assumptions set out in the study and meet standard scientific criteria such as a controlled experimental design, proper task enumeration, internal and external validity, and construct validity.

Summary

In light of the theoretical and practical (experimental) evidence derived from scientific sources, I now address the strengths and weaknesses of the present research study and summarise the key recommendations made in this evaluation.

Strengths

  1. The ability to perform a double-blind controlled experiment grounded in the theoretical discussion is the biggest strength of the research. It shows that the authors are confident about the scope, validity, and later implementation of the research.
  2. The clear-cut benefits of the research are clarified using a set of hypotheses (see 2C). The practical implementation of the key recommendations is clear from an independent point of view.
  3. Pitts and Browne (2007) have calculated means and standard deviations for the quantity, breadth, and depth of requirements in the IRD assessment, which indicate stability across different validity paradigms.
  4. The study fulfils the requirements of construct validity from a statistical standpoint (in light of the above point).

Weaknesses/Recommendations

  1. A clear articulation of the research problem is missing from the paper; the research problem shown above is my summary derived from the paper. Many scholars suggest that problem articulation is the most important constituent of a research paper. The authors should rewrite this part of the paper to present the research problem clearly, as reframed above (see the section on the Research Problem).
  2. The experimental data lack a reliability analysis across a diverse array of situations. Pitts and Browne (2007) have only addressed one-on-one communication between the user and a systems analyst. To address this, the authors may conduct a more diverse range of experiments, such as invoking procedural prompts for different groups of users and analysts and repeating the experiment under “masked” and “open” control between users and analysts. The experiment could also be run as a randomised controlled trial (RCT) by assigning participants to conditions at random (see the sketch after this list).
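
To illustrate the randomised-assignment recommendation in point 2, here is a minimal sketch that randomly allocates a hypothetical pool of analysts to the interrogatory and procedural conditions; the participant identifiers and group sizes are assumptions, not details from the paper.

```python
# A minimal sketch (not from the original study) of randomly assigning a
# pool of analysts to the interrogatory and procedural prompting
# conditions, as the RCT recommendation above suggests.
import random

def randomise_conditions(participants,
                         conditions=("interrogatory", "procedural"),
                         seed=42):
    """Shuffle participants and split them evenly across the conditions."""
    rng = random.Random(seed)  # fixed seed so the allocation is reproducible
    shuffled = participants[:]
    rng.shuffle(shuffled)
    return {
        condition: shuffled[i::len(conditions)]
        for i, condition in enumerate(conditions)
    }

# Hypothetical participant identifiers standing in for the analyst pool.
analysts = [f"analyst_{i:02d}" for i in range(1, 55)]  # e.g. a pool of 54
groups = randomise_conditions(analysts)
for condition, members in groups.items():
    print(f"{condition}: {len(members)} analysts")
```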

References

Editor’s Comments, 2003, “The Problem of the Problem”, MIS Quarterly, Vol. 27, No. 1, iii-ix.

Gregor, S., 2006, “The Nature of Theory in Information Systems”, MIS Quarterly, Vol. 30, No. 3, 611-642.

Jarvenpaa, S.L., 1985, “Methodological Issues in Experimental IS Research”, MIS Quarterly.

Larsen, T.J. & Naumann, 1992, “An Experimental Comparison of Abstract and Concrete Representations in Systems Analysis”, Information and Management, 22, 29-40.

Pitts, M.G. & Browne, G.J., 2007, “Improving Requirements Elicitation: An Empirical Investigation of Procedural Prompts”, Information Systems Journal, 17, 89-110.

Straub, D.W., 1989, “Validating Instruments in MIS Research”, MIS Quarterly.

Vitalari, N.P., 1992, “Structuring the Requirements Analysis Process for Information Systems: A Propositional Viewpoint”, in Challenges and Strategies for Research in Systems Development, Cotterman, W.W. & Senn, J.A. (eds), 163-179, Wiley & Sons, New York, NY.
