Business Simulation Exercise’s Effect on Teamwork Dissertation


Introduction

Simulations have been used as early as 1910 as a means to train both individuals and teams to reduce errors and improve safety (Fowlkes, Dwyer and Oser 1998: 209-221). Commercial aviation and the military have invested heavily in the use of simulation-based training because it offers a realistic, safe, cost-effective, and flexible environment in which to learn the requisite competencies for the job (Salas 2001: 641-674). Given its professed success in these areas, the use and application of simulations as a training tool has spread to a number of other domains, such as business, education, and medicine (Jacobs and Dempsey 1993: 197-228).

Although the popularity of simulation-based training has grown during the past decade, using simulation as a part of training is not a panacea. A review of the team training literature in health care, which we conducted in 2004 (Salas 1999: 123-161), showed, for example, that simulation-based training is used to improve team performance (Howard 1992: 763–770). However, it appeared that simulation-based training programs early on either focused on the engineering component of training (that is, the simulator itself) or took a more balanced approach in which simulation is studied in the context of a learning methodology.

Yet research indicates that simply adding simulation to training does not make it more effective nor will it guarantee that trainees learn more, better, or the right things (Salas 1999: 123-161). Rather, simulation-based training must be designed and delivered on the basis of what we know about the science of training and learning (Salas, Cannon-Bowers 2001: 471–499).

Simulation-based training provides opportunities for trainees to develop requisite competencies through practice in a simulated environment that is representative of actual operational conditions; trainees receive feedback related to specific events that occur during training (Oser et al. 1999: 175–202). There is a wide array of simulation types that can be used to train teams (Beaubien and Baker 2004: 51–56). Simulations can range from low-fidelity role-playing exercises (for example, an event or scenario is re-enacted) to part-task trainers to high-fidelity full-motion simulations.

Validated simulation promotes greater depth of understanding and higher levels of retention, fosters the development of stronger critical thinking and analytical skills, and generates enthusiasm for learning.

The aim of this paper is to describe the effects of the use of simulation exercises on teamwork. Observations regarding historical developments are linked to the current state of the literature, and the implications of these observations for the future direction of the simulation literature are briefly discussed. The research will look at learning and motivational theories, and at how these theories affect the knowledge gained during the simulation exercise and its effect on teamwork. The pedagogical value of validated simulation will be analysed, and learning opportunities and the factors influencing and affecting teamwork will be outlined.

Attention to transferable skills, or ‘deep learning’, such as working and relating to others, communication, and problem solving, has intensified; these skills will be referred to in an attempt to outline the effects of simulation use on participants and their teamwork.

The paper is divided into three sections. The first section reviews the current literature, covering papers that support simulation as an effective way of developing teamwork together with papers evidencing a diminished or reversed effect of validated simulation on teamwork; this review of the research done so far in the field builds a foundation for the study. The second section outlines the methodological procedure employed to undertake the research, along with the research questions.

This section will also discuss the research outcomes. The third and last section will try to relate the findings to the available literature; this will include the limitations of the study and concluding remarks.

Literature Review

This section undertakes a detailed study of effective training in team building and the contribution of simulation in this area. We will examine research showing that validated simulation is an effective method of team building, along with research that has shown no effect, or the reverse effect, on the same. First we will try to understand how team-building exercises originated and the behavioural theories which have supported the methods of effective team building.

A team is a group of people working together to achieve a common purpose for which they hold themselves accountable. Effective teams are fast and flexible enough to respond to the challenges of the times. Teams today take many forms: management teams, ongoing work teams, improvement teams, and self directed work teams, to name a few.

Leavitt’s (1975) paper, entitled “Suppose We Took Groups Seriously…,” raises the possibility that both people and organizations would be better off if groups, rather than individuals, were the basic building blocks in the design and management of organizations.

Recent trends in organizational practice, such as the increasing use of quality circles, autonomous work groups, project teams, and management task forces, suggest that groups are indeed becoming a popular way to get things done in organizations.

Transforming teams of experts into expert teams necessarily begins with an understanding of what characteristics uniquely define the team and set it apart from small groups (Paris, Salas and Cannon-Bowers 2000: 471–499). Teams are more than collections of individuals, and teamwork is more than the aggregate of their individual behaviors.

Team research in the 1990s also evolved to a great extent around another theoretical development, one which constitutes a significant and unifying thread underlying much of the current work in this field: the concept of the shared mental model (Cannon-Bowers et al. 1993). Mental models are knowledge structures, cognitive representations or mechanisms (e.g. mental simulations) which humans use to organize new information, to describe, explain and predict events (Rouse and Morris 1986: 349-363), as well as to guide their interaction with others (Rumelhart and Ortony 1977, Gentner and Stevens 1983).

A team mental model would reflect the dependencies or interrelationships between team objectives, team mechanisms, and temporal patterns of activity, individual roles, individual functions, and relationships among individuals. Shared mental models allow team members to implicitly and more effectively coordinate their behaviors, i.e. they are better able to recognize the individual responsibilities and information needs of team-mates, monitor their activities, diagnose deficiencies, and provide support, guidance, and information as needed (Orasanu 1990, Entin et al. 1994, Duncan et al. 1996).

Team training is essentially ‘a set of tools and methods that, in combination with required competencies and training objectives, form an instructional strategy’ (Salas and Cannon-Bowers 2000: 5). Cognitive, behavioral, and affective competencies necessary for effective teamwork drive the training objectives. Those objectives combine with available resources (tools and methods) to shape the development of specific instructional strategies. Training tools include, but are not limited to, team task analysis, task simulation and exercises, and performance measurement and feedback. Methods for delivery may be information-based, demonstration-based, or practice-based, and may include lectures, video or multimedia presentations, demonstrations, guided practice, and role-playing (Salas and Cannon-Bowers 1997, 2000).

Simulation and Teamwork Training

A simulation exercise can represent a global retail company on a map of one square metre, allowing participants an opportunity to learn from simulation, which has been described as one of the ‘powerful training tools’ (Harris 1993). A simulation is a model or software package that simulates a business decision (Adobor and Daneshfar 2006) and is ‘information and rules, algebra and logic made visible to an observer’ (Lane 1995: 604-625). Marting (1957), in Top Management Decision Simulation, describes how five teams of people act out the running of a real-world company and input decisions on budgeting, marketing and production.

The positive outcome was seen to be that the turnaround time for the calculations was fifteen minutes and the experience kept the participants intent on their roles during the simulation exercise. Participants continued to discuss and reflect on what went on afterwards, highlighting a positive impact on teamwork. Graham and Stewart (1994) list the benefits of real-world learning experiences as dynamic, collaborative and interactive processes. Validated simulation promotes greater depth of understanding and higher levels of retention, develops stronger critical thinking and analytical skills, and generates enthusiasm for learning. Paul (1991) claims that the benefits of validated simulations are not modest, and perhaps understandably, the literature suggests they are being used by many of today’s leading businesses.

In the 1970s it was possible to observe a range of opinion on the effects of simulation on teamwork. Wolfe (1976) believed that the use of validated simulation could help teams to learn and understand the need for creating a basic business strategy, but he failed to observe any enhancement of an analytical approach to strategic decision-making. Wellington and Faria (1991) suggest that the pedagogical value of simulations should be focused on ‘deep learning’ and the acquisition of transferable skills.

However, they conclude that transferable skills, or the experiential learning necessary in teamwork, such as the development and acquisition of decision-making and interpersonal communication skills, may not be measurable. Teach (1993) suggests that forecasting accuracy, correlated with measures of profit, be used as a measure of learning, and later comments that as a result the nature of simulation could change for the better (Teach 2007). Anderson and Lawton (1992) found that only two of seven learning measures correlated with performance. The literature highlights a lack of agreement on the factors employed to evaluate the effects of learning in simulation on teamwork.

Binsted (1986) describes learning as a process that moves from the reception of input to discovery, through reflection. Working to build transparent and comprehensible simulation models became the way forward, and the 1980s saw the advent of new computer simulations in which the approach engaged with the perceived need to provide learning opportunities to teams.

Determinants such as the atmosphere in simulation teams, including task and emotional conflict (Adobor and Daneshfar 2006), together with trust and cooperation (Kramer 1999) and the quality of the human resources that make up the members of the simulation team (Partington and Harris 1999), may all affect teamwork. Team dynamics and cohesiveness, together with the conditions that affect participants’ learning and cognitive function both in the simulation exercise and in teamwork, will be described. Conventional, more passive teaching methods and ‘surface learning’ will be examined for their appropriateness to the modern world of work.

As opportunities for more imaginative and innovative learning experiences have been taken up, the new method of learning brought about by the resultant increase in simulation use will be analysed. The introduction of a new, more active approach to teaching and learning using simulation exercises, and its effects on teamwork, will be described. This work will recommend the nature of the conditions and situations that affect learning opportunities in simulation and its influence on teamwork.

A review of the literature suggests that there is a lack of agreement on the factors employed to evaluate the effects of simulation on teamwork. Nevertheless, the analysis supports an understanding of the simulation exercise as a process concerned with the practical, decision-making and communication skills relevant to getting the job done. Simulation involving group work provides an opportunity for the interchange of ideas and problem-solving exercises and, as well as being a valuable learning experience, can help to develop in participants an ability to work effectively as part of a team. Simulation could heighten interest, enhance and motivate learning, and in turn encourage effective teamwork.

Simulation exercises place a priority on learning rather than teaching and also shift the emphasis away from the transfer of knowledge towards the acquisition of knowledge, towards deep learning and the acquisition and enhancement of critical and strategic thinking skills necessary in developing teamwork.

In conclusion, this work argues that, in relevant situations and with the appropriate factors and theories taken into account, validated simulation promotes learning amongst its participants and is thereby effective in developing teamwork.

Research Methodology Literature Review

The following discussion covers the methodology literature and gives a clear view of what methodology needs to be adopted for the research. Saunders et al. (2003), Johnson and Duberley (2000) and Bryman and Bell (2003) all outline the facets, benefits and drawbacks of using questionnaires. The main issue facing the researcher is ensuring that the questionnaire asks the correct questions, probing the areas that will best answer the research question.

Bryman and Bell (2003) and Saunders et al. (2003) suggest methods for improving response rates through questionnaire design, and these were adhered to during the data collection phase. Bryman and Bell (2003: 123) suggest writing a covering letter or introduction that outlines the identity of the researcher, the auspices under which the research is being conducted, the function of the research and the kind of information to be collected.

Brace (2004: 113), highlights issues that need to be considered when writing a questionnaire. These include the language and style of language in which the questionnaire is written, ensuring that there is no ambiguity in the questions or the responses, the pre-coding system to be used, and the use of prompt material. The author continues to point out that the bias caused by the ordering of the questions, and the ordering of the prompted responses is also an issue when writing a questionnaire.

When writing the questionnaire it is important to ensure that the respondents will understand the questions, and not feel intimidated, threatened or challenged by the questions (Brace 2004:114). To best avoid this, Brace (2004) argues that the questions should be phrased in everyday language with limited use of technical terminology, so that the questionnaire seems like a “conversation by proxy” between researcher and respondent.

Ambiguity is also outlined by the author, and has the potential to render a piece of research incapable of interpretation and therefore useless if present in a study.

Bryman and Bell (2003) indicate that by using a Likert scale for questionnaire responses it is possible to quantify the responses, which makes analysis easier.

Pre-codes are ways of grouping answers from qualitative questions into categories to ease quantification of results for analysis. It provides consistency of response by forcing an open-ended question into a limited number of answers. Brace (2004), Bryman and Bell (2003) and Saunders et al (2003) highlight the importance of using pre-codes to improve a questionnaire’s effectiveness. All three authors emphasise that a good questionnaire will provide a comprehensive list of responses to choose from, whilst also providing an “other” category so as not to force respondents into a response that does not fully represent their opinion. Care should be taken to ensure that pre-codes are mutually exclusive, as exhaustive as possible, as precise as necessary, and meaningful (Brace, 2004).

Research Questions

With the research being predominantly a study of human behaviour, an inductive approach was chosen. Descriptive, action research, and deductive strategies were all deemed inappropriate because this research is based on building a theory and answering a research question rather than testing a hypothesis or describing a business method or model. An experimental design strategy was dismissed because, according to Saunders et al. (2003), Bryman and Bell (2003) and Johnson and Duberley (2000), true field experiments are very rare in business and management research, due in no small part to the difficulty of achieving control and establishing validity.

Cross sectional design is a strategy that employs systematic and standardised methods for gauging variation, quantifying this variation to determine and examine relationships between variables. The systematic and standardised methods for gauging variation include questionnaires and semi-structured interviews. Thus, with the other strategies inappropriate for use here, the most appropriate strategy to adopt for the research is a cross-sectional design. This entails “the collection of data…at a single point in time in order to collect a body of quantifiable data in connection with two or more variables, which are then examined to detect patterns of association” (Bryman and Bell, 2003: p.48).

The project has been undertaken to ascertain whether a validated simulation training programme is an effective method for improving teamwork. This involves studying whether validated simulation provides the parameters required for teamwork training, whether the required skill sets are included in the training process, and whether it has been successful in building team competencies such as collaboration, problem solving, conflict resolution and effective coordination of the team.

To come to a conclusion on the main question that the research aims to answer, we have formulated seven research questions; on the basis of the support or rejection of each, we will evaluate the main question. The questions have been drawn from our understanding of the literature on the effectiveness of validated simulation in teamwork development.

Our first question is based on the theory that simulations are realistic and ‘induce real world-like responses by those participating in the exercise’, although the amount of realism may vary (Lane 1995). It is believed that trainees learn faster when they get hands-on experience (Harris 1993), making simulation an experiential learning pedagogy. Thus our first question is:

  • Q1: Do simulations create a real-world-like experience, making training more effective?
    • Gospinth and Sawyer (1999) postulate that validated simulations make trainees see a relationship between their decisions and the results, which boosts their involvement and their learning. It is argued that this raises their motivation considerably and helps them to learn more. This brings us to our second question.
  • Q2: Does simulation make team decision making stronger and instil trust among trainees regarding team decisions?
    • Another study (Koch 1991) concluded that simulation encouraged interaction, cooperation and teamwork, which suggests that simulation effectively increases interaction and cooperation among team members. This brings us to our third question.
  • Q3: Do teams learn to cooperate more through simulation training processes?
    • Another stream of literature believes that simulations might produce an increase in motivation, cognitive learning, and changes in structure and relationships. From this we draw our fourth question.
  • Q4: Does the simulation exercise motivate teams and bring a sense of engagement among team members?
    • We also wanted to understand whether the simulation did away with past fault lines in the programme and helped the students to identify ways to rectify the drawbacks. This leads to our fifth question.
  • Q5: Does the simulation help participants evaluate past faults in practice and take steps to rectify them?
    • If a training programme does not keep the participants engaged, it becomes very difficult to deliver results, as the participants will not be attentive. It is therefore important that a programme engages the participants throughout so that the whole message of the training sinks in. For this reason we consider our sixth question.
  • Q6: Was the classroom simulation engaging?
    • Some critics of simulation-based training say that its outcome comes down to luck (Thorngate and Carroll 1987), while Burns et al. (1990) agree that performance can be affected by luck, and Washbush and Gosen (1998) suggest that the effects of luck may be attenuated but not eliminated. This criticism brings us to our seventh question.
  • Q7: Is the success of simulation in team building dependent on luck rather than on an effective method?

Methodology

The 100 students who studied MANG 3008, Strategic Management, were asked to undertake a business simulation over the course of six weeks, upon which a piece of coursework would be based. Of the 100 students, 56 responded, and these responses were analysed. The participants were organised into 25 teams of four members each; there were 44 male and 56 female participants. The simulation used was the Total Enterprise Simulation.

These students were all sent a questionnaire to complete and return via email. According to Bryman and Bell (2003), the research needed to be salient to the respondents both to improve response rates and to improve the quality of responses. This was achieved by asking students who had already shown an interest in both management and simulation, having chosen to study the module. Saunders et al. (2003) note that a research project should use the correct instruments but also allow sufficient time and utilise existing contacts in order to be successful.

Internal validity raises the question of how confident we can be that the independent variable really is, at least in part, responsible for the variation in the dependent variable. The internal validity of a cross-sectional research design is typically weak because the research, by definition, is designed to produce associations rather than findings from which causal inferences can be unambiguously made (Bryman and Bell, 2003).

From the literature we studied, we identified a number of questions which required validation, and to understand the effectiveness of these research questions we conducted a survey. The questionnaire was developed after a detailed study of the literature on effective questionnaire development and the literature on team building, validated simulation and its effectiveness in the team-building process. To arrive at the detailed final questionnaire we conducted a pilot survey; the final survey was then administered through an online survey program. The population comprised all those who had undergone a simulation-based team-building exercise.

As Bryman and Bell (2003, p.145) outline, response rates are important because the lower a response rate, the more questions are likely to be raised about the representativeness of the achieved sample. They go on to report, however, that in a non-probability sample (which was used in this research) the response rate is less of an issue, as generalisation is not the aim of the sample. The Literature Review outlines how response rates can be improved by questionnaire design and pilot studies, and these guidelines were followed.

Self-completion questionnaires provide several advantages and disadvantages when compared to semi-structured interviews, another method normally associated with a cross-sectional design methodology (Bryman and Bell, 2003: 142). Self-completion questionnaires are quicker and cheaper to administer and produce no interviewer effects (such as social desirability bias), as there is no interviewer present. They are convenient for respondents, who can go at their own speed, and they do not suffer from interviewer variability, the problem of interviewers asking questions in a different order or in different ways (Bryman and Bell, 2003: 143).

When using self-completion questionnaires as opposed to semi-structured interviews, one cannot prompt respondents if they are having difficulty in answering a question correctly, or even probe respondents to elaborate on their answers. No additional information can be picked up when using a questionnaire, as an interviewer is not present to collect supplementary information that is not dealt with by a specific question. As the design methodology uses a questionnaire that has been pilot tested and uses a simple Likert scale with little room for additional information on specific questions, some of the disadvantages are alleviated.

This is because the research aims to gather specific and specialised information rather than requesting general observatory information on an organisation or group, where a semi-structured interview might gain preference to a self-completion questionnaire (Bryman and Bell, 2003).

The questionnaire is based on a five-point scale. There are ten questions to be answered by the respondents, who rate each statement as strongly agree, agree, neither agree nor disagree, disagree, or strongly disagree. The respondents choose one of the options based on their view. The options are simple and easy to understand, with no technical jargon. The questionnaire was designed so that it addressed all the questions which would have a direct or indirect bearing on the hypotheses.

The method for developing a questionnaire that is valid, reliable and effective in collecting relevant data is outlined in the Literature Review. The questionnaire was careful to avoid leading questions and bias, whilst providing respondents with opportunities to disclose their opinions in a confidential environment.

During the design phase the issues of validity and reliability had to be addressed. Validity is defined by Saunders et al (2003) as “a concern with the integrity of the conclusions that are generated from a piece of research”, with reliability being “the consistency of a measure of a concept”. Validity can be split into internal, external, measurement and ecological validity (Bryman and Bell, 2003), with reliability being split into stability, internal and inter-observer reliability.

Due to the questions in the questionnaire being designed carefully and with a clear layout and explanation of purpose as recommended by Saunders et al (2003), along with a pilot study of the questionnaire from which alterations were made at the behest of subjects, there were no obvious issues concerning measurement validity. According to Johnson and Duberley (2000), subjects of the research will interpret some adjectives and descriptors stronger than others, and for this reason it is imperative to distribute the questionnaire broadly so that any anomalous outlooks can be grouped to achieve a more stable and representative viewpoint.

As the questionnaire was given to students of the Strategic Management module, the research was employing a “convenience sample” (Bryman and Bell, 2003), because of this; the external validity of the research is questionable. External validity is the issue of whether the results of a survey or research can be generalised beyond the research context (Saunders et al, 2003). Due to the research being based on a non-random sample, there may be issues concerning the representation of the whole population from this sample, as the subjects within the sample may have many attributes in common (e.g. personal preference or beliefs picked up from previous elements of the course) and this may provide low variety within the sample.

The very fact that the subjects of the self-completion questionnaire had to undergo the unnatural task of answering formalised questions may mean that the findings may have limited ecological validity (Cicourel, 1982 via Bryman and Bell, 2003).

The internal reliability of the research was addressed in reference to Terence Jackson (2001) and his study of Cultural values and Management ethics. In this study Jackson provided twelve statements (indicators) that respondents rated using a Likert Scale in accordance with how they personally ethically viewed the statement. As the statements all concerned the same issue, the study was internally reliable as the respondent’s scores on any one indicator tended to be related to their scores on other indicators. This research will employ the same technique of asking respondents to score indicators, and so there appear to be no issues of internal unreliability.

By using a Likert scale in the questionnaire, the questions were closed, and so the task of processing the data for computer analysis became quite simple (Bryman and Bell, 2003). As self-completion questionnaires were used, and there were no open-ended questions, there were no inter-observer reliability issues.

The questionnaire was sent to the 100 students who studied MANG 3008. This means that a convenience sample was used, as the sample was simply available by virtue of its accessibility (Bryman and Bell, 2003: p.105), and there may therefore be issues in generalising the findings, because the degree to which this sample represents the population cannot be ascertained. Bryman and Bell (2003) suggest that it is fairly acceptable to use a convenience sample when a chance presents itself to gather data that represents too good an opportunity to miss. This sort of research, the authors go on to discuss, will not allow for definitive findings, but could provide the basis for further research, or be amalgamated with existing findings on the subject.

Some statisticians have no problem with analyzing individual Likert-type items using t-tests or other parametric procedures (Sisson & Stocker, 1989), provided the primary interest is in location only. If the survey process produces order and normality, normal theory procedures can be employed regardless of the attained measurement level. We do not dispute the logic, but disagree with the premise when discussing Likert-type items.

It is difficult to see how normally distributed data can arise in a single Likert-type item. The data will frequently be skewed, and often these items do not capture the true limits of the attitude. An individual item will often produce distributions showing a floor or ceiling effect – respondents choosing the lowest or highest available alternative. In these situations, the true mean for a Likert-type item may not be measurable because of limitations imposed.
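To make the preceding discussion concrete, the following is a minimal sketch (not the study's own script) of the kind of parametric analysis described above: a one-sample t-test on Likert-coded responses, using Python's scipy library. The response vector and the comparison value are illustrative assumptions, not data from this research.

```python
# Minimal sketch: a one-sample t-test on Likert-coded responses.
# The data and the reference value are illustrative assumptions only.
from scipy import stats

# Hypothetical Likert codes (1 = strongly disagree ... 5 = strongly agree)
responses = [4, 5, 3, 4, 2, 5, 4, 4, 3, 5, 4, 1, 4, 5, 3, 4]

# Test whether the mean response differs from a chosen reference value.
reference_value = 3.0  # assumed neutral midpoint, not taken from the study
t_stat, p_value = stats.ttest_1samp(responses, popmean=reference_value)

print(f"t = {t_stat:.3f}, two-tailed p = {p_value:.3f}")
```

As the passage notes, a skewed item or one showing a floor or ceiling effect undermines the normality assumption behind such a test, so any result of this kind should be read with caution.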

For the analysis of the data we consider central tendency as our statistical tool. The word ‘average’ denotes a ‘representative’ or ‘typical value’ of a whole set of observations. It is a single figure that describes the entire series of observations with their varying sizes. Since a typical value usually occupies a central position, so that some observations are larger and some others are smaller than it, averages are also known as measures of central tendency.

There are three measures of central tendency: mean, median and mode. The mean is defined as the sum of a set of observations divided by the number of observations. It is simple to calculate and easy to understand, but it suffers from a disadvantage: it is highly affected by the presence of extreme values, i.e. extremely large or small values.

Median of a set of observations is the value of the middle most item when they are arranged in order of magnitude. It can be calculated from a grouped frequency distribution either by using simple interpolation in a cumulative frequency distribution, or by using the formula:

Median = l1 + ((N/2 − F) / fm) × c

Where,

  • l1 = lower boundary of the median class
  • N = total frequency
  • F = cumulative frequency below the median class
  • fm = frequency of the median class
  • c = width of the median class.

Median is, in a certain sense, the real measure of central tendency, as it gives the value of the most central observation. It is unaffected by extreme values and can be calculated from distribution of open ended classes.

The mode of a given set of observations is the value which occurs with the maximum frequency. It is often used in management studies as it identifies the most commonly occurring response.

Quartiles are used in measuring central tendency, dispersion and skewness; the lower and upper quartiles are used to define the quartile deviation, which is a measure of dispersion. We consider three measures of dispersion: standard deviation, range and quartile deviation. The quartile deviation is defined as half the difference between the upper and the lower quartile. The standard deviation of a set of observations is the square root of the mean of the squared deviations from the mean, and the range is simply the difference between the maximum and the minimum value of the set of observations.
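As a brief illustration of these definitions, the following sketch implements the grouped-median formula given above and the three measures of dispersion. All input values in the sketch are assumed examples, not the study's raw data.

```python
# Minimal sketch of the descriptive measures defined above.
# All input values below are assumed examples, not the study's raw data.
import numpy as np

def grouped_median(l1, N, F, fm, c):
    """Median = l1 + ((N/2 - F) / fm) * c, as given in the text.
    l1: lower boundary of the median class; N: total frequency;
    F: cumulative frequency below the median class;
    fm: frequency of the median class; c: width of the median class."""
    return l1 + ((N / 2 - F) / fm) * c

# Hypothetical Likert-coded responses (1-5).
x = np.array([1, 2, 2, 3, 3, 3, 4, 4, 4, 4, 5, 5])

std_dev = x.std()                       # square root of the mean squared deviation from the mean
value_range = x.max() - x.min()         # maximum minus minimum
q1, q3 = np.percentile(x, [25, 75])     # lower and upper quartiles
quartile_deviation = (q3 - q1) / 2      # half the difference between the quartiles

print(grouped_median(l1=3.5, N=56, F=26, fm=20, c=1.0))
print(std_dev, value_range, quartile_deviation)
```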

Analysis

The research questions were analysed using frequency distributions of the responses to the survey questions, which deal with the effect of the simulation programme on team building in a class of students. We carry out the analysis by evaluating each research question against the descriptive statistics of the corresponding survey responses.

There were 100 students to whom the questionnaire was sent; of these, we received 56 responses. It is clear that the students’ overall impression of the simulation exercise is positive. Further statistical analysis will answer the questions posed in the research questions section. We assume that the data concerned arise from a single, homogeneous population, and the analysis is therefore limited to the analysis of covariance of correlations.

The results of the survey can be analysed in two parts; first we interpret the descriptive statistics. From Table 2 we get the mean and the standard deviation of the data. We have used a Likert scale for rating the questions, and we now assign numerical values to the scale points, taking the ratings of the Likert scale to be the following (a short sketch after the list shows how the rating averages in Table 1 follow from these weights):

  • Strongly Disagree = 1
  • Disagree = 2
  • Neither = 3
  • Agree = 4
  • Strongly Agree = 5
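As a check on these weights, the sketch below shows how a rating average of the kind reported in Table 1 follows from the frequency counts. The counts used are the Q2 row of Table 1 (“There was a positive climate in our team”); the sketch is illustrative and is not the study's own analysis script.

```python
# Minimal sketch: computing a Table 1 style rating average from frequency counts.
weights = [1, 2, 3, 4, 5]              # strongly disagree ... strongly agree
q2_counts = [1, 6, 4, 25, 20]          # Q2 row of Table 1

n = sum(q2_counts)                     # 56 respondents
rating_average = sum(w * f for w, f in zip(weights, q2_counts)) / n

print(n, round(rating_average, 2))     # 56 and 4.02, matching Table 1
```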

Let us evaluate each of the research questions before we come to a plausible conclusion as to what the overall findings suggest.

First we consider a summary of the responses we received from our survey. In Table 1, the responses for each question on each scale point have been summed. For instance, in the case of Q1, there are 2 respondents who ‘strongly disagree’ that the simulation provided realistic scenarios; similarly, 8 respondents ‘disagreed’, 8 said ‘neither’, 22 ‘agreed’ and 16 ‘strongly agreed’. Alongside these counts we have given the percentage of respondents who chose each rating. From this summarised table we can deduce that most of the responses tend towards agreement with the statements, i.e. they lie between ‘agree’ and ‘strongly agree’.

Table 1.

To what extent do you personally agree with the following statements:

Statement | Strongly Disagree | Disagree | Neither | Agree | Strongly Agree | Rating Average
The simulation provided realistic scenarios (Q1) | 3.6% (2) | 14.3% (8) | 14.3% (8) | 39.3% (22) | 28.6% (16) | 3.785
There was a positive climate in our team (Q2) | 1.8% (1) | 10.7% (6) | 7.1% (4) | 44.6% (25) | 35.7% (20) | 4.02
Our team worked together well (Q3) | 3.6% (2) | 12.5% (7) | 10.7% (6) | 37.5% (21) | 35.7% (20) | 3.89
I feel that I improved my team working skills during the simulation (Q4) | 5.4% (3) | 14.3% (8) | 16.1% (9) | 42.9% (24) | 21.4% (12) | 3.61
I saw a relationship between the decisions the team made, and the results (Q5) | 5.4% (3) | 8.9% (5) | 16.1% (9) | 51.8% (29) | 17.9% (10) | 3.68
Our team reflected on previous results (Q6) | 8.9% (5) | 19.6% (11) | 14.3% (8) | 42.9% (24) | 14.3% (8) | 3.34
I found the simulation engaging (Q7) | 5.4% (3) | 10.7% (6) | 17.9% (10) | 37.5% (21) | 28.6% (16) | 3.73
Luck was the main reason for our successes (Q8) | 41.1% (23) | 25.0% (14) | 12.5% (7) | 17.9% (10) | 3.6% (2) | 2.18
All members of our team contributed (Q9) | 7.1% (4) | 10.7% (6) | 3.6% (2) | 32.1% (18) | 46.4% (26) | 4.00
I felt motivated throughout the simulation (Q10) | 5.4% (3) | 23.2% (13) | 10.7% (6) | 30.4% (17) | 30.4% (17) | 3.57

Considering Table 2 for further analysis, we calculate the mean, median and mode for all the questions, together with the standard deviation and the first and third quartiles of the data.

From Table 2 it is clear that the means of the responses to the ten questions lie between 2.2 and 4.2. The average of the means for the whole set of questions is 3.6, with the average standard deviation lying at 1.16. This indicates that the greatest number of responses lie in the region where the response weights are 3 and 4, i.e. ‘neither’ and ‘agree’. This contrasts with the findings of Table 1, where the greatest number of responses lie in the area of 4 and 5, i.e. ‘agree’ and ‘strongly agree’.

Table 2.

Statement | Mean | Median | Mode | SD | Min | Max | First Quartile | Third Quartile
The simulation provided realistic scenarios (Q1) | 4.2 | 4 | 4 | 1.13 | 1 | 5 | 3 | 5
There was a positive climate in our team (Q2) | 4.0 | 4 | 4 | 1.02 | 1 | 5 | 4 | 5
Our team worked together well (Q3) | 3.9 | 4 | 4 | 1.14 | 1 | 5 | 3 | 5
I feel that I improved my team working skills during the simulation (Q4) | 3.6 | 4 | 4 | 1.14 | 1 | 5 | 3 | 4
I saw a relationship between the decisions the team made, and the results (Q5) | 3.7 | 4 | 4 | 1.05 | 1 | 5 | 3 | 4
Our team reflected on previous results (Q6) | 3.3 | 4 | 4 | 1.21 | 1 | 5 | 2 | 4
I found the simulation engaging (Q7) | 3.7 | 4 | 4 | 1.15 | 1 | 5 | 3 | 5
Luck was the main reason for our successes (Q8) | 2.2 | 2 | 1 | 1.25 | 1 | 5 | 1 | 3
All members of our team contributed (Q9) | 4.0 | 4 | 5 | 1.26 | 1 | 5 | 4 | 5
I felt motivated throughout the simulation (Q10) | 3.6 | 4 | 4 | 1.29 | 1 | 5 | 2 | 5

Table 3.

Descriptive Statistics
Question | N | Mean | Std. Deviation | Minimum | Maximum
Q1 | 56 | 3.73 | 1.120 | 1 | 5
Q2 | 56 | 4.00 | 1.009 | 1 | 5
Q3 | 56 | 3.84 | 1.125 | 1 | 5
Q4 | 56 | 3.38 | 1.105 | 1 | 5
Q5 | 56 | 3.68 | 1.046 | 1 | 5
Q6 | 56 | 3.32 | 1.193 | 1 | 5
Q7 | 56 | 3.68 | 1.130 | 1 | 5
Q8 | 56 | 2.18 | 1.252 | 1 | 5
Q9 | 56 | 4.00 | 1.265 | 1 | 5
Q10 | 56 | 3.57 | 1.291 | 1 | 5

Table 4 (a).

One-Sample Statistics
Question | N | Mean | Std. Deviation | Std. Error Mean
Q1 | 56 | 3.73 | 1.120 | .150
Q2 | 56 | 4.00 | 1.009 | .135
Q3 | 56 | 3.84 | 1.125 | .150
Q4 | 56 | 3.38 | 1.105 | .148
Q5 | 56 | 3.68 | 1.046 | .140
Q6 | 56 | 3.32 | 1.193 | .159
Q7 | 56 | 3.68 | 1.130 | .151
Q8 | 56 | 2.18 | 1.252 | .167
Q9 | 56 | 4.00 | 1.265 | .169
Q10 | 56 | 3.57 | 1.291 | .173

Table 4 (b).

One-Sample Test
Test Value = 3.53
Question | t | df | Sig. (2-tailed) | Mean Difference | 95% CI of the Difference, Lower | 95% CI of the Difference, Upper
Q1 | 1.351 | 55 | .182 | .202 | -.10 | .50
Q2 | 3.486 | 55 | .001 | .470 | .20 | .74
Q3 | 2.058 | 55 | .044 | .309 | .01 | .61
Q4 | -1.050 | 55 | .298 | -.155 | -.45 | .14
Q5 | 1.063 | 55 | .293 | .149 | -.13 | .43
Q6 | -1.309 | 55 | .196 | -.209 | -.53 | .11
Q7 | .984 | 55 | .329 | .149 | -.15 | .45
Q8 | -8.078 | 55 | .000 | -1.351 | -1.69 | -1.02
Q9 | 2.781 | 55 | .007 | .470 | .13 | .81
Q10 | .240 | 55 | .811 | .041 | -.30 | .39

Table 5.

Test Statistics
Question | Chi-Square | df | Asymp. Sig.
Q1 | 23.107a | 4 | .000
Q2 | 41.321a | 4 | .000
Q3 | 25.250a | 4 | .000
Q4 | 15.250a | 4 | .004
Q5 | 38.286a | 4 | .000
Q6 | 22.929a | 4 | .000
Q7 | 19.536a | 4 | .001
Q8 | 22.393a | 4 | .000
Q9 | 38.286a | 4 | .000
Q10 | 14.714a | 4 | .005
a. 0 cells (.0%) have expected frequencies less than 5. The minimum expected cell frequency is 11.2.
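For transparency, the following is a minimal sketch of how a chi-square goodness-of-fit test of the kind reported in Table 5 can be computed. With 56 responses spread over five categories, the expected frequency under a uniform split is 56 / 5 = 11.2 per cell, the value quoted in note a. The observed counts used here are the Q2 row of Table 1; because of rounding differences between the published tables, the statistic produced by this sketch will not necessarily reproduce the Table 5 figure exactly.

```python
# Minimal sketch: chi-square goodness-of-fit against a uniform expected split.
from scipy import stats

observed = [1, 6, 4, 25, 20]         # Q2 counts from Table 1 (n = 56)
chi2, p = stats.chisquare(observed)  # expected defaults to 56/5 = 11.2 per cell

print(f"chi-square = {chi2:.3f}, df = {len(observed) - 1}, p = {p:.4f}")
```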

For a more in-depth analysis, we consider the frequencies and percentages of responses for each question from Tables 1 and 2 and try to ascertain the students’ response to each research question, in order to come to a more definite conclusion rather than a vague answer.

We first consider our first research question: Do simulations create a real-world-like experience, making training more effective?

First we consider the frequency of responses on the Likert scale for question 1. Here we see, from Table 1, that the percentage and frequency of responses are highest for ‘Agree’ and ‘Strongly Agree’, which together account for 67.9% of responses. This gives a clear indication that the students agree with the first research question. Again, from Table 2, we see that the mean for the first question is 4.2.

Moreover the mode, i.e. the most frequently occurring value in the data, is 4, and the median too is 4. Furthermore, the values of the first and third quartiles, 3 and 5, signify that most of the responses lie in the range of 3 to 5, which on the Likert scale indicates agreement with the question. Thus the students believe that simulations create a real-world-like experience, making training more effective. This is in line with Keys and Wolfe’s (1990) study, which holds that simulations are realistic and ‘induce real world-like responses by those participating in the exercise’. Altmyer (2000) believes that ‘Simulation has demonstrated that learning styles with computer support offer students a sense of ‘real time/real place’ activity and that the concentration and motivation appear greater’.

The second question is: Do simulations help team members see a relationship between the decisions they take and the outcomes?

Table 1 gives the frequency of the responses from the fifth question in the questionnaire, i.e. “I saw a relationship between the decisions the team made, and the results”. This shows that almost 70% of the students believe that their training helped them identify a relationship between the decisions taken by the team and their outcome.

Further, from Table 2 we see that the mean is 3.7. With a standard deviation of 1.05, it can be deduced that most of the responses lie in the zone of agreement with the statement. Considering the mode, we see that it is 4, which implies that most of the students ‘agree’ with the statement made. Considering the median and the quartile values, we see that the quartiles lie between 3 and 4 and the median is 4. Thus the analysis signifies that the students saw a relationship between the decisions taken by the team and the outcomes, supporting our second research question. We find similar results elsewhere in the literature.

There may be a need to make decisions to solve a problem and do well. Gospinth and Sawyer (1999) state that when ‘users see a relationship between users decisions and results the greater would their involvement and their learning’. From this it can be argued that simulation exercises might encourage motivation to do well, facilitate active involvement in learning, and be positively associated with improved teamwork.

The third question: Do teams learn to cooperate more through simulation training processes? This question is answered by Q4 and Q9.

In the case of Q4, which states “I feel that I improved my team working skills during the simulation”, 44% of the respondents believe that they improved their teamwork skills during the simulation. From this figure we cannot assert that the simulation helped to improve the teamwork of the studied group; neither can we say that it did not, because the data show that only 19.7% of the respondents believe that it did not enhance their teamwork skills.

Most of the respondents are inclined towards a neutral response. The mean for this question is 3.6 and the standard deviation is 1.14. Moreover, the mode for this question is 4, which shows that most of the students ‘agree’ with the statement. Further, the median is 4 and the first and third quartiles lie at 3 and 4. Thus we can conclude that the students believe that the simulation has improved their team-working capabilities.

Now from Q9, which states “All members of our team contributed”, we find that only 17% of the respondents disagree or strongly disagree that all the members of the team contributed, while more than 70% believe that there was some contribution from all the team members. Further, the mode is 5, i.e. students ‘strongly agree’ that all the team members contributed, and this is supported by the median and the quartile values too.

Thus, considering questions four and nine together, we may conclude that there is agreement among the students as to the positive effect of simulation on the teamwork capabilities of individuals; the students learn to cooperate more. This is validated in the literature. Koch (1991) evaluated a political campaign management simulation with its participants and found that two-thirds of the participants expressed positive reactions to the simulation on every measure. They enjoyed the team aspect of the activity, and interaction with others was a positive experience. The study concluded that the simulation encouraged ‘interaction, cooperation and teamwork’.

Altmyer (2000) found that if students continue to satisfy their curiosity by continuing their individual learning after a simulation exercise, an opportunity for greater interaction to augment learning exists. He concludes that ‘Promoting group and team skills, delegation of duties, developing trust, building self confidence and sharing research strategies are additional by products of simulation’ that may affect teamwork.

The fourth question states: Does the simulation exercise motivate teams and bring a sense of engagement among team members? This is answered by Q2 and Q10.

The statement for Q2 is “There was a positive climate in our team”. From the frequency distribution of the responses for Q2 we find that more than 80% of the respondents believe that there was a positive climate in the team during the simulation. From the mean of the responses for this question, which is 4, we see that the tendency is towards agreement with this particular statement. The median also lies at 4, and the quartiles lie between 4 and 5, indicating a strong tendency towards agreement. To be more certain, we consider the mode; this too lies at 4, implying that most of the respondents agree with the statement that “There was a positive climate in our team”.

Analysing Q10, i.e. “I felt motivated throughout the simulation”, we find that 60% of the respondents believe that they felt motivated throughout the simulation. The mean for this particular question is 3.6 and the median is 4. The quartiles lie between 2 and 5, which gives a rather vague picture, but from the mode, which is 4, we can assert that most of the students agree with the statement.

Thus, from the analysis of both Q2 and Q10 we can conclude that the simulation brought a sense of engagement among team members which made them motivated. This implies that the students believe that their motivational level increased with the simulation, although the extent of that increase is not extremely high. The literature on the subject supports this finding. Gray and Walcott (1977) speculate that simulations might produce an increase in motivation, cognitive learning, and changes in structure and relationships. Cole (1993) describes Herzberg’s theory of motivation, in which factors closely connected to the job motivate behaviour.

Therefore we can conclude that appropriate simulation encourages motivation and enhances the critical and strategic thinking skills of users; in turn it may be considered effective and to have a positive effect on the interpersonal, communication and problem-solving skills associated with developing teamwork. Simulation involving group work provides an opportunity for the interchange of ideas and problem-solving exercises and, as well as being a valuable learning experience, can help to develop in participants an ability to work effectively as part of a team.

We now consider the alternative view, in which research has found a reversed or diminishing effect of validated simulation as a way of developing teamwork.

Considering the fifth question of the research: Does the simulation help evaluate past faults in practice and take steps to rectify them? This question can be answered from the responses to questionnaire item Q6, which states “Our team reflected on previous results”.

66% of the responses support the statement; these respondents ‘agree’ or ‘strongly agree’ that the simulation training helped teams identify previous faults and rectify them. However, the mean for the question is 3.3, which is below the average mean across all the questions. Further, the median is 4 but the quartiles lie between 2 and 4, which suggests a tendency of the respondents towards an overall neutral response to the question. The mode for this question is 4, signifying that the majority of the students believe that the teams reflected on the results. But because the mean and the quartiles do not give a clear indication of the respondents’ choices, we cannot conclusively say that simulation training helps teams to identify past faults and rectify them.

The sixth question is: Was the classroom simulation engaging? This is answered through the question “I found the simulation engaging” i.e. Q7.

From Table 1 we find that 74% of respondents felt that the simulation was engaging.

The mean of the sample responses to Q7 is 3.7. Further, the median is 4 and the quartiles are 3 and 5, which clearly indicates a positive response from the students. The mode lies at 4. Thus we can conclude that there is agreement between the students’ responses and the statement made in Q7. This is also supported by Gospinth and Sawyer’s (1999) study, in which they state that when ‘users see a relationship between users decisions and results the greater would their involvement and their learning’; from this it can be argued that simulation exercises might encourage motivation to do well, facilitate active involvement in learning, and be positively associated with improved teamwork.

The seventh question in the research is: Is the success of simulation in team building dependent on luck rather than on an effective method? This is answered by question Q8, i.e. “Luck was the main reason for our successes”.

In this question, 66% ‘disagree’ or ‘strongly disagree’ with the proposition that the success of the simulation depends on luck, 12% are inclined towards neither position, and only 20% of the respondents ‘agree’ or ‘strongly agree’ with the statement. This clearly indicates that the respondents believe that the simulation was well tried and its success was not a stroke of luck. Moreover, the mean of the responses lies at 2.2, which clearly indicates disagreement; the median is 2 and the quartiles are 1 and 3, which also indicate that there is no connection between luck and the success of the simulation.

The mode is 1, which shows that most of the students ‘strongly disagree’ that luck influenced the success of the simulation. This runs counter to research in the literature such as that of Thorngate and Carroll (1987), who argue that simulation performance comes down to luck, while Burns et al. (1990) agree that performance can be affected by luck and Washbush and Gosen (1998) suggest that the effects of luck may be attenuated but not eliminated.

We now turn to the final research hypothesis, namely that a validated simulation helps in increasing group teamwork capabilities. From Table 2 we see that the average of the means of the responses to all the questions is 3.6, the median is 3.8 and the mode is also 3.8. We can therefore deduce that the overall analysis supports our study, i.e. that a validated simulation increases group teamwork capabilities. This finding is supported by the following studies.

Raia (1966) demonstrated that proper use of a carefully crafted and appropriately complex simulation exercise could heighten interest, enhance and motivate learning, and in turn encourage effective teamwork. Validated simulation exercises place a priority on learning rather than teaching; the emphasis shifts away from the transfer of knowledge towards the acquisition of knowledge, towards deep learning and the acquisition and enhancement of the critical and strategic thinking skills necessary in developing teamwork. Teach and Patel (2007) emphasise that simulations permit ‘experiential learning to teach many things that are difficult to teach vis-à-vis lecturing’. Research suggests that effective teamwork skills are difficult to acquire from traditional passive teaching methods in a classroom setting.

Greater depth of understanding and higher levels of retention, together with the development of stronger critical thinking and analytical skills and enthusiasm for learning, are thought to be necessary qualities in teamwork. Kolb’s (1974) learning cycle comprises four stages, namely experience; observation and reflection; theorizing and conceptualization; and testing and experimentation, all of which may occur during a simulation exercise. One could argue that in the real world of work there may not be time for feedback in the form of seeing the consequences of an input or an action, whereas this occurs in the simulation exercise.

When the learning cycle is completed, knowledge is improved, and if active learning occurs in the simulation exercise it may have a positive effect on teamwork. Building on Kolb’s theoretical base, Honey and Mumford (1986, 1992) identify major categories of learning styles in relation to team dynamics; when these styles are represented in the human resources of a team participating in a simulation exercise, they may also have a positive impact on teamwork.

Limitations of the Study

This thesis has some limitations. Specific limitations of the research and the methodology are discussed below:

  1. The research had to be based on post-training feedback in the form of survey responses. For effective evaluation of a training programme, the most efficient model is Donald Kirkpatrick’s four-step training evaluation model, in which it is important to take pre-training feedback from the trainees; in our research we had to rely on post-training feedback only.
  2. We had to limit our analysis to a one-sample t-test because we did not have both pre-training and post-training feedback. The assessment would have been more complete if we had had both, so it is imperative to take pre- and post-training feedback for a complete evaluation to take place.
  3. Our study is limited due to the small sample size. Moreover, it does not do any analysis of the effect of demographics or other environmental factors which may affect the success of the training. Our survey was limited and did not ask questions as to how the training was conducted. This would have given us an idea regarding the mode of conducting the simulation and its effectiveness.
  4. We did not consider the effect of a simulation program in different industries or different organizational structures. This would have given a comprehensive picture about the effectiveness of validated simulation on teamwork training in general.

Conclusion

To conclude, due to the underlying shortcomings of this research, it can be extended further. More work needs to be done with the aid of a proper training evaluation model so that responses from the sample can be collected at different times before and after the training is conducted. Moreover, we need to see whether the simulation has the same effect on similar population groups, i.e. whether it has any specific effect on a particular age group or sex. The study could then be extended to the effect of simulation in different kinds of industries and its effect on team building. Such an analysis would complete this research.

References

Lane, D.C. (1995) ‘On a Resurgence of Management Simulations and Games’, Journal of the Operational Research Society, vol. 46, no. 5, p. 604-625.

Marting E. (1957) Top Management Decision Simulation: The AMA Approach. AMA, New York.

Feinstein, A.H., Mann, S. and Corsun, D.L. (2002) ‘Charting the experiential territory’, Journal of Management Development, vol. 21, no. 10, p. 732-744.

Altmyer D.J. (2000) ‘Using an online stock market simulation as a cross – disciplinary learning enhancer: simulation as an example of grey literature’ The International Journal on Grey Literature vol 1 No 3, p 121-127.

Gentry J. (1991), The Guide to Business Gaming and Experiential Learning, Nichols Publishing, East Brunswick.

Raia, A.P. (1966) ‘A study of the educational value of games’, The Journal of Business, vol. 39, no. 3, p. 339-352.

Wolfe, J. (1976) ‘The effects and effectiveness of simulations in business policy teaching applications’, The Academy of Management Review, vol. 1, no. 2, p. 47-56.

Newhauser J. (1976) ‘Business games have failed’ Academy of Management review, Vol 1, No. 4, p. 124-129.

Keys B and Wolfe J. (1990) ‘The role of management games and simulations in education and research’ Journal of Management, vol. 16, p.307-336.

Holtham C. (1992) ‘Artificial environments’ Times Higher Educational Supplement.

Adobor, H. and Daneshfar, A. (2006) ‘Management simulations: determining their effectiveness’, Journal of Management Development, vol. 25, no. 2, p. 151-168.

Kramer R.M. (1999), ‘Trust and Distrust in Organisations: emerging Perspectives, Enduring Questions’ Annual Review Psychology, Vol 50, p.569-598.

Partington D., Harris H. (1999), ‘Team role balance and team performance: an empirical study’ Journal of Management Development Vol 18, No. 8, p. 694.

Binsted D. (1986), Developments in Interpersonal Skills Training, Gower: Aldershot.

Anderson P.H., Lawton L. (1992), 'The relationship between financial performance and other measures of learning on a simulation exercise' Simulation and Gaming, vol 23, p. 326-340.

Thorne J. (1992) ‘New stimulus for those simulations’ Management Today.

Elgood C. (1988), Handbook of Management Games, 4th ed., Gower: Aldershot, UK.

Sale J.T. (1972), 'Using Computerised Budget Simulation Models as a Teaching Device' The Accounting Review, vol 47, no. 4, p. 836-839.

Koch N.S. (1991) 'Winning is not the only thing: an evaluation of a micro-computer campaign simulation' Political Science and Politics, vol 24, no. 4, p. 694-698.

Gray V. and Walcott C. (1977) 'Simulation, Learning, and Student Attitudes' Teaching Political Science, vol 4, no. 3.

Paul R.J. (1991) 'Recent Developments in Simulation Modelling' The Journal of the Operational Research Society, vol 42, no. 3, p. 217-226.

Davidsen P.I., Bjurklo M. and Wikstrom H. (1993) 'Introducing system dynamics in schools: the Nordic experience' System Dynamics Review, vol 9, p. 165-181.

Teach R. (1993) ‘Forecasting Accuracy as a performance measure in business games’ Simulation & Gaming, vol 24, no.4, p. 476-490.

Teach R. (2007) ‘Forecasting Accuracy and Learning: A Key to Measuring Business Game Performance’ Developments in Business Simulation and Experiential Learning, vol.34. p 57-66.

Teach R., Patel V. (2007) 'Assessing Participant Learning in a Business Simulation' Developments in Business Simulation and Experiential Learning, vol 34, p. 76-84.

Sneddon I., Kremer J. (1994) An Enterprising Curriculum: Teaching Innovations in Higher Education, HMSO: Belfast.

Peach E.B. and Hornyak M. (2003) 'What are simulations for? Learning objectives as a simulation selection device' Developments in Business Simulation and Experiential Learning, vol 30, p. 220-224.

Robinson S. and Pidd M. (1998) 'Provider and customer expectations of successful simulation projects' Journal of the Operational Research Society, vol 49, no. 3, p. 200-209.

Senge P. M. (1990) ‘The leader’s new work: building learning organisations’ Sloan Management Review, vol 32, p 7-23.

Burns A., Gentry J. and Wolfe J. (1990) 'A cornucopia of considerations in evaluating the effectiveness of experiential pedagogies' in Gentry J. (ed.) Guide to Business Gaming and Experiential Learning, London.

Thorngate W., Carroll B. (1987) 'Why the best person rarely wins' Simulation and Games, vol 18, p. 299-320.

Reid M.A. and Barrington H.A. (1994) Training Interventions: Managing Employee Development, 4th ed., Institute of Personnel and Development/The Cromwell Press: Wiltshire, p. 404.

Cole G.A. (1993), Personnel Management, 3rd ed., DP Publications Ltd.: London.

Read C.W. and Kleiner B.H. (1996) 'Which training methods are effective?' Management Development Review, vol 9, no. 2, p. 24-29.

Honey and Mumford (1986 and 1992), Personnel Management, DP Publications Ltd.: London.

Graham J.J., Stewart S. (1994) ‘Live project: achieving deep learning in hospitality education’ Proceedings of Innovation in Learning and Assessment in Hospitality Management Education Conference, Leeds Metropolitan University.

Jehn K.A. and Mannix E.A. (2001) 'The dynamic nature of conflict: a longitudinal study of intragroup conflict and group performance' Academy of Management Journal, vol 44, no. 2, p. 238-251.

Thompson T.A., Purdy J.M., Fandt P.M. (1997) 'Building a strong foundation: using a computer simulation in an introductory management course' Journal of Management Education, vol 21, no. 3, p. 418-434.

Curry B. and Moutinho L. (1992) 'Using computer simulations in management education' Management Education and Development, vol 23, no. 3, p. 155-167.

Tompson G.H. and Dass P. (2000) 'Improving students' self-efficacy in strategic management: the relative impact of cases and simulations' Simulation and Gaming, vol 31, no. 1, p. 22-41.

Thorne J. (1992) ‘New Stimulus for those simulations’ Management Today.

Fripp J. (1994) 'Why Use Business Simulations?' Executive Development, vol 7, no. 1, p. 29-32.

Harris P.R. (1993) 'Team development for European Organisations' European Business Review, vol 93, no. 4, p. 3-11.

Wellington W.J., Faria A.J. (1991) ‘An investigation of the relationship between simulation play, performance level and recency of play on exam scores’ Developments in Business Simulation & Experiential Exercises Vol 18 p.111-115.

Gopinath C. and Sawyer J.E. (1999) 'Exploring the learning from an enterprise simulation' Journal of Management Development, vol 18, no. 5, p. 1-11.

Wheatley W.J., Hornaday R.W. and Hunt T.J. (1988) 'Developing strategic management goal setting skills' Simulation & Games, vol 19, p. 173-185.

Handy C. (1993) Understanding Organisations, 4th ed., Penguin: London.

Fiedler F.E. (1996) 'Research on Leadership Selection and Training: One View of the Future' Administrative Science Quarterly, vol 41, no. 2, p. 241-250.

Washbush J., Gosen J. (1998) 'Total simulation performance and participant learning' Journal of Workplace Learning, vol 10, no. 6/7, p. 314-319.

Cheetham G., Chivers G. (2001) 'How professionals learn in practice: an investigation of informal learning amongst people working in professions' Journal of European Industrial Training, vol 25, no. 5, p. 247-282.

Smith E., Boyer M. (1996) 'Designing In-Class Simulations' Political Science and Politics, vol 29, no. 4, p. 690-694.

Fowlkes J.E., Dwyer D.J., Oser R.L., et al. (1998) 'Event-based approach to training (EBAT)' International Journal of Aviation Psychology, vol 8, p. 209-221.

Salas E., Bowers C.A. and Rhodenizer L. (1998) 'It is not how much you have but how you use it: toward a rational use of simulation to support aviation training' International Journal of Aviation Psychology, vol 8, p. 197-208.

Salas E., et al. (2001) 'Team training in the skies: does crew resource management (CRM) training work?' Human Factors, vol 43, p. 641-674.

Jacobs J.W., Dempsey J.V. (1993) 'Simulation and gaming: fidelity, feedback, and motivation' in Dempsey J.V., Sales G.C. (eds.) Interactive Instruction and Feedback, Educational Technology Publications: Englewood Cliffs, N.J., p. 197-228.

Howard S.K., et al. (1992) 'Anesthesia crisis resource management training: teaching anesthesiologists to handle critical incidents' Aviation, Space, and Environmental Medicine, vol 63, p. 763-770.

Salas E., et al. (1999) 'Training in organizations: myths, misconceptions, and mistaken assumptions' in Ferris G. (ed.) Personnel and Human Resources Management, vol 17, JAI Press: Greenwich, CT, p. 123-161.

Salas E., Cannon-Bowers J.A. (2001) 'The science of training: a decade of progress' Annual Review of Psychology, vol 52, p. 471-499.

Oser R.L., et al. (1999) 'Enhancing human performance in technology-rich environments: guidelines for scenario-based training' in Salas E. (ed.) Human/Technology Interaction in Complex Systems, vol 9, JAI Press: Greenwich, CT, p. 175-202.

Beaubien J.M., Baker D.P. (2004) 'The use of simulation for training teamwork skills in health care: how low can you go?' Quality and Safety in Health Care, vol 13, suppl. 1, p. 51-56.

Paris C.R., Salas E. and Cannon-Bowers J.A. (2000) 'Teamwork in multi-person systems: a review and analysis' Ergonomics. Web.

Leavitt H.J. (1975) 'Suppose We Took Groups Seriously' in Man and Work in Society, Western Electric Co., AT&T.

Rouse W.B. and Morris N.M. (1986) 'On looking into the black box: prospects and limits in the search for mental models' Psychological Bulletin, vol 100, p. 349-363.

Rumelhart D.E. and Ortony A. (1977) 'The representation of knowledge in memory' in Anderson R.C., Spiro R.J. and Montague W.E. (eds.) Schooling and the Acquisition of Knowledge, Lawrence Erlbaum Associates: Hillsdale, N.J.

Gentner D. and Stevens A.L. (eds.) (1983) Mental Models, Lawrence Erlbaum Associates: Hillsdale, N.J.

Witte R.S. and Witte J.S. (2007) Statistics, 8th ed., Harcourt Brace Jovanovich: Ft. Worth, TX.

Sisson D.A. and Stocker H.R. (1989) 'Analyzing and interpreting Likert-type survey data' The Delta Pi Epsilon Journal, vol 31, no. 2, p. 81-85.
