Introduction
The purpose of this paper is to critically analyze and evaluate various publications that cover different topics in systems thinking. The topics include the challenges of studying complex systems, qualitative mapping, quantitative modeling, addressing system failures, and the importance of defining the success of IT projects. The analysis is based on the perspectives of various scholars on these topics.
Complex Systems: The Challenges
Scientists and social researchers face significant challenges when analyzing the behavior of complex systems. Science involves obtaining meaningful insights into the changes in, and relationships between, the elements of a system. This leads to several challenges, which include collecting data in large-scale experiments, moving from data to dynamic models, establishing cause-effect relationships, relating comprehensive models to simple ones, and developing new methods of prediction.
At the beginning of modern science, scientists studied complex systems by analyzing simple phenomena, chiefly in physics. Reductionist science was used to understand complex systems in society. Advances in modern science and society have since motivated researchers to investigate the principles of complexity and the phenomena associated with complex systems. This involves focusing on synthesis rather than analysis. Thus, FuturICT has been developed to promote synthesis in the study of complex systems. In the social sciences, statistical laws and techniques are used to study complex systems. This trend has been promoted by the emergence of large new databases, new complex social phenomena, and the development of simplified models.
In order to overcome the challenges of the science of complex systems, researchers should meet the following criteria. To begin with, researchers must have adequate empirical data to explore and understand the features of complex systems. Accurate and complete data are also necessary for calibrating and validating the models used in empirical studies. Complex systems can be studied systematically by following a five-step approach. The first step involves making observations, exploring, and collecting data. The second step involves identifying correlations, patterns, and mechanisms in the data. In the third step, modeling techniques are developed to study the phenomenon. In the fourth step, the model is validated, implemented, and used for predictions. Finally, the output of the model is used to construct theory.
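To make this pipeline concrete, the following minimal Python sketch walks through the five steps on synthetic data; the variable names, the linear mechanism, and the train/test split are illustrative assumptions rather than material from the reviewed publications.

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1: observe, explore, and collect data (here, synthetic observations).
x = rng.uniform(0, 10, 200)
y = 2.0 * x + rng.normal(0, 1, 200)          # hidden mechanism: y = 2x + noise

# Step 2: look for correlations, patterns, and candidate mechanisms.
print(f"correlation(x, y): {np.corrcoef(x, y)[0, 1]:.2f}")

# Step 3: develop a modeling technique (here, a simple linear fit).
train, test = slice(0, 150), slice(150, 200)
slope, intercept = np.polyfit(x[train], y[train], 1)

# Step 4: validate the model on held-out data and use it for prediction.
pred = slope * x[test] + intercept
rmse = np.sqrt(np.mean((pred - y[test]) ** 2))
print(f"out-of-sample RMSE: {rmse:.2f}")

# Step 5: feed the result back into theory, e.g. by comparing the
# estimated mechanism with the theoretically expected one.
print(f"estimated mechanism: y ~ {slope:.2f}*x + {intercept:.2f}")
```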
The reliability of findings from studies of complex systems depends on the modeling techniques used, effective aggregation of information, and the ability to understand extreme events. In this regard, simple models have been found to be effective since they lead to a better understanding of the stylized facts about a complex system. Simple models often have better forecasting power than complicated ones because of their clarity and tractability. Modeling is often challenging due to limited access to high-quality data, the use of ineffective mathematical or statistical techniques, and the emergence of phenomena that are not reducible to their components.
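The forecasting advantage of simple models can be illustrated with a toy experiment: fitting a simple and a complicated polynomial to the same noisy observations and comparing their errors on unseen cases. The data-generating process and the polynomial degrees below are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 1, 30))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, 30)   # noisy training data

x_new = np.linspace(0, 1, 200)                       # unseen cases
y_true = np.sin(2 * np.pi * x_new)

# Simple vs complicated model; the complicated one chases the noise.
# (numpy may warn that the high-degree fit is poorly conditioned.)
for degree in (3, 15):
    coeffs = np.polyfit(x, y, degree)
    rmse = np.sqrt(np.mean((np.polyval(coeffs, x_new) - y_true) ** 2))
    print(f"degree {degree:2d}: forecast RMSE {rmse:.2f}")
```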
In addition, selecting the right information to use in modeling is challenging because complex systems are characterized by dynamical properties that are linked to the topology of the network of relationships among their parts. Effective selection and aggregation of information during system analysis can be achieved through social and individual learning. When studying socio-technical systems, researchers should use theories and methods drawn from the social sciences, since ICT networks are composed of interacting humans. In order to overcome the challenges of collecting data, technologies such as participatory sensing and social computing should be used. Moreover, extreme events can be understood through extreme value analysis and the study of self-organized criticality.
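As an illustration of extreme value analysis, the following hedged sketch fits a generalized extreme value (GEV) distribution to block maxima of synthetic heavy-tailed data using scipy; the data, block size, and return period are illustrative assumptions.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(2)
daily = rng.standard_t(df=3, size=20 * 365)       # 20 "years" of heavy-tailed data
annual_max = daily.reshape(20, 365).max(axis=1)   # block maxima, one per year

shape, loc, scale = genextreme.fit(annual_max)    # fit the GEV distribution

# 100-year return level: the magnitude exceeded on average once per century.
level = genextreme.ppf(1 - 1 / 100, shape, loc=loc, scale=scale)
print(f"estimated 100-year event magnitude: {level:.1f}")
```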
Qualitative Maps versus Quantitative Models/Simulations
Qualitative Maps
Given the challenges associated with understanding complexity, scientists are yet to agree on the best approach or technique to use when studying complex systems. Generally, some scholars believe that qualitative maps should be used, whereas others consider quantified models to be the most appropriate techniques for studying complex systems. Existing literature presents mixed findings on the use of qualitative and quantitative models to understand the dynamics of various complex systems. Research conducted by Wolstenholme and Coyle indicated that quantitative models were effective in analyzing system dynamics.
Lane supported this view by demonstrating that studying system dynamics without quantified simulations leads to contradictory findings. Similarly, Richardson argued that quantitative models are superior to their qualitative counterparts. The use of qualitative maps or models, on the other hand, has been supported by researchers who believe that quantitative models have a limited ability to explain complex systems.
The use of qualitative maps is mainly justified by the fact that quantitative models are characterized by uncertainties that often lead to misleading conclusions. Uncertainties arise from the use of soft variables, which are often difficult to formulate as equations. Moreover, data for soft variables such as customer satisfaction are not readily available. In this regard, qualitative maps such as causal-loop diagrams are considered effective when studying complex systems characterized by uncertainty. Specifically, proponents argue that qualitative maps can be used without being supplemented by simulations, since they provide insights into the problem being studied through inference rather than calculation.
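The idea that a causal-loop diagram supports inference without calculation can be illustrated in a few lines of code: a loop is reinforcing if the product of its link polarities is positive and balancing if it is negative. The word-of-mouth adoption loop below is a standard textbook example, not one drawn from the reviewed papers.

```python
# Signed causal links: +1 means the variables move in the same direction,
# -1 means they move in opposite directions.
links = {
    ("adopters", "word_of_mouth"): +1,
    ("word_of_mouth", "adoption_rate"): +1,
    ("adoption_rate", "adopters"): +1,
}

def loop_polarity(cycle):
    """Multiply link polarities around a closed loop."""
    sign = 1
    for cause, effect in zip(cycle, cycle[1:] + cycle[:1]):
        sign *= links[(cause, effect)]
    return "reinforcing" if sign > 0 else "balancing"

# The word-of-mouth loop is reinforcing: growth feeds on itself --
# an insight reached by inference alone, without solving equations.
print(loop_polarity(["adopters", "word_of_mouth", "adoption_rate"]))
```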
The use of soft variables in quantitative models leads to the generation of parameters whose meanings are uncertain. An effective system dynamics model must use variables that correspond to real-system variables. In this regard, the decision functions used in modeling have to represent the social factors, concepts, and sources of information that influence actual decisions. When quantitative models fail to meet these criteria, their output is considered to have no value. For instance, empirical studies have found that quantitative models cannot explain the collapse of the Maya civilization, whereas qualitative models have provided valuable insights into it. Thus, qualitative models can be used to describe the dynamics of a complex system.
Qualitative models such as influence diagrams enhance our understanding of system dynamics in several ways. First, they provide a summarized description of the dynamics of the system, thereby enhancing the researcher's understanding of the causes and effects of the problem. Second, the models make clear the relationships between the components of the system and the stakeholders involved, which facilitates articulation of the problem and identification of the most appropriate solution. Third, qualitative models can provide insights that facilitate understanding of the behavior of complex systems. Fourth, a qualitative model provides a holistic view of the nature of the system being studied, which enables scientists to understand the nature of the changes in a system and to develop an appropriate model to analyze it. Finally, qualitative maps make it easier for researchers to develop quantitative models.
Quantitative Models/Simulations
Although quantitative models are associated with uncertainties, they are considered effective in the study of complex systems. This perspective developed from the widely held view that system dynamics could not be understood by making inferences from causal-loop diagrams alone. Thus, quantitative models came to be considered the only reliable means of studying complex systems and conducting policy analysis.
In order to demonstrate the credibility of quantified models, several researchers, including Homer and Oliva, have reviewed the importance of simulation models in analyzing the dynamics of complex systems. Their perspective is based on the argument that reliable inferences cannot be drawn from complex causal maps in the absence of simulation. Furthermore, system dynamics methods can be used to address the challenges attributed to soft variables and incomplete data.
Simulation using quantitative models is important in dynamic analysis for the following reasons. To begin with, a model can only be used as a reliable solution to a problem if its superiority to the alternatives can be demonstrated. Simulation not only helps in demonstrating the superiority of the selected technique or model, but also identifies the core structure of the problem. The uncertainties associated with simulation can be dealt with in two ways. First, mental databases contain much of the information required during modeling. Thus, they can be used to address the challenges associated with limited access to numerical data and to avoid generating parameters whose meanings are uncertain. Second, sensitivity testing can be used to demonstrate that the model's behavior and the inferences drawn from it are independent of the uncertainty in parameter values. Thus, reliable conclusions can still be drawn using quantitative models.
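A minimal sketch of these two points follows: a simple goal-seeking stock-and-flow model is simulated by Euler integration, and a sensitivity test sweeps the uncertain parameter to check that the behavior mode is preserved. The model structure and all parameter values are illustrative assumptions, not a model from the reviewed papers.

```python
import numpy as np

def simulate(adjustment_time, target=100.0, stock=20.0, dt=0.25, horizon=40.0):
    """Euler integration of d(stock)/dt = (target - stock) / adjustment_time."""
    trajectory = []
    for _ in range(int(horizon / dt)):
        stock += (target - stock) / adjustment_time * dt
        trajectory.append(stock)
    return np.array(trajectory)

# Sensitivity test: sweep the uncertain parameter over a wide range and check
# that the behavior mode (smooth goal seeking) survives the uncertainty.
for adjustment_time in (2.0, 5.0, 10.0):
    traj = simulate(adjustment_time)
    monotonic = bool(np.all(np.diff(traj) >= 0))
    print(f"adjustment_time={adjustment_time:4.1f}: "
          f"final stock {traj[-1]:5.1f}, goal-seeking: {monotonic}")
```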
Several conclusions can be drawn concerning the choice between qualitative and quantitative models. To begin with, qualitative models are typically used in the first two stages of research. In the first stage, diagrams are used to describe the dynamics of the system or the problem being studied. In the second stage, the diagram is analyzed to establish the connections between the elements of the system. Quantitative models become relevant from the third stage onward, as the researcher seeks deeper insights into the problem or data. Simulation often provides important insights into the nature of complex systems, especially in policy analysis.
In addition, simulation is important because it helps in determining whether qualitative maps are misleading. This implies that quantitative models remain important despite the uncertainties associated with them. Overall, qualitative models have weaknesses that limit their ability to explain the dynamics of a complex system. However, quantitative models are not always superior to qualitative ones, because they too have weaknesses that limit their application to the study of complex systems. Thus, the study of complex systems, and policy analysis in particular, can be improved by using both qualitative and quantitative techniques.
Effective System Dynamics Modeling: Policy Resistance
The effectiveness of dynamic modeling can be illustrated by society's reactions to its output. Policy resistance is one of the outcomes of poor modeling of dynamic systems. The problems associated with policy formulation call for three strategies to avoid failure. First, the policy maker has to understand the complex systems in society in a holistic manner. Second, the major causes of policy resistance must be analyzed and clearly understood. Finally, appropriate methods must be adopted to formulate policies that produce sustainable benefits in order to avoid resistance. Developing an effective policy is often difficult because every solution to a problem has side effects. Thus, a policy that has the potential to address the problem it was formulated to solve can still be resisted if it produces undesirable side effects.
Traditionally, systems thinking has been identified as a viable solution to the problem of policy resistance. Systems thinking is a problem-solving approach in which the world is perceived as a complex system. It enables scholars to view the world in a holistic manner, which improves researchers' ability to learn quickly and effectively. As a result, researchers are able to identify the weaknesses in complex systems that must be addressed to avoid policy resistance. In addition, a systemic view of the world enables researchers and policy makers to make decisions that serve both individual interests and the interests of society as a whole.
The problem of policy resistance is mainly attributed to the limited ability of most researchers, and of people generally, to comprehend complexity. People fail to understand the effects of their decisions because their mental models are limited, inconsistent, and unreliable. As a result, they make decisions that are beneficial in the short term but harmful in the long term.
Policy resistance also arises because we perceive experience as a series of events in which one event causes another. However, the real world provides feedback by responding to policy actions, and inadequate understanding of the feedback generated by complex systems leads to policy resistance. Policy resistance also occurs because the effects of decisions can take a prolonged period before they are felt in society. This delay makes it difficult to study the causal relationships within a complex system.
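The effect of delay can be demonstrated with a toy simulation in which a decision maker adjusts a system toward a target while perceiving its state with a lag; the delay turns smooth goal seeking into overshoot and oscillation, a classic signature of policy resistance. All numbers below are illustrative.

```python
import numpy as np

def simulate(delay_steps, target=100.0, gain=0.2, steps=80):
    """Adjust a stock toward a target using delayed information about its state."""
    history = [0.0]
    for t in range(steps):
        perceived = history[max(0, t - delay_steps)]   # outdated perception
        history.append(history[-1] + gain * (target - perceived))
    return np.array(history)

for delay_steps in (0, 6):
    traj = simulate(delay_steps)
    print(f"delay={delay_steps}: peak value {traj.max():.1f} (target 100.0)")
# With no delay the stock approaches the target smoothly; with a delay it
# overshoots and oscillates, so the system appears to "resist" the policy.
```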
The problem of policy resistance can be addressed by using tools that facilitate understanding of the causes of dynamic complexity. These tools include qualitative maps that show causal relationships and quantitative simulation models. They facilitate understanding of the feedback in dynamic systems that can render policy decisions irrelevant and ineffective.
Addressing System Failures
Information and communication technology (ICT) systems play a vital role in the study and analysis of complex systems. Specifically, they facilitate important processes such as data collection and modeling. However, the effective application of ICT is limited by recurrent system failures. Similar system failures tend to recur within organizations. In addition, the quick fixes used to resolve system failures have side effects. For instance, most organizations use redundant safety mechanisms to respond to system failures. However, this technique is ineffective because it does not reduce or eliminate human error.
Researchers have developed various methods that are currently used to analyze system failures. These include failure mode and effects analysis (FMEA) and fault tree analysis (FTA). The FMEA methodology focuses on analyzing single-point failures. It uses a bottom-up approach to analyze the causes of a failure, and the results are usually presented in tables. The FTA method, on the other hand, uses a top-down approach to analyze several failures simultaneously. The results of the analysis are usually presented as logic diagrams.
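The top-down logic of FTA can be sketched with simple gate calculations under an independence assumption: an AND gate multiplies failure probabilities, and an OR gate combines them via the complement rule. The tree structure and the probabilities below are invented for illustration; real FTA tools also handle shared events and minimal cut sets.

```python
def and_gate(*probs):
    """The event occurs only if all independent inputs occur."""
    p = 1.0
    for q in probs:
        p *= q
    return p

def or_gate(*probs):
    """The event occurs if any independent input occurs."""
    p = 1.0
    for q in probs:
        p *= (1.0 - q)
    return 1.0 - p

# Basic (leaf) event probabilities -- assumed values for illustration.
disk_failure, backup_failure = 0.01, 0.05
operator_error, alarm_failure = 0.02, 0.10

# Top-down decomposition of the top event "data loss":
# both storage paths fail, OR an operator error goes unnoticed.
storage_loss = and_gate(disk_failure, backup_failure)
unnoticed_error = and_gate(operator_error, alarm_failure)
data_loss = or_gate(storage_loss, unnoticed_error)

print(f"P(data loss) = {data_loss:.4f}")   # about 0.0025
```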
Total systems intervention (TSI) has been introduced as an advanced method for preventing the occurrence of system failures. TSI enables scientists to manage complex and varied viewpoints. In this regard, the system of system failures (SOSF) framework can be used by different stakeholders as a shared language to facilitate effective understanding of failures. TSI helps organizations to identify the stakeholders who are involved in a particular system failure. This involves using a matrix to outline the factors that various stakeholders consider to have caused the failure, making it possible to understand their different viewpoints concerning the failure.
There are two approaches that can be used to analyze specific types of system failures: the failure factor structuring methodology (FFSM) and the system failure dynamic model (SFDM). FFSM addresses system failures attributed to complex failure factors by facilitating double-loop learning. SFDM, on the other hand, addresses system failures that occur due to environmental changes. It ensures that systems operate optimally and that the side effects of quick fixes are minimized.
System failures can be addressed by taking the following actions. First, the perception gaps among stakeholders should be closed so that the adopted solution reflects the stakeholders’ needs. Second, the key performance indicators should be linked to absolute goals. This ensures that the solutions adopted to address system failures achieve the predetermined objectives. Finally, the boundary of the existing systems should be defined clearly in order to identify the improvements that should be made without introducing undesirable side-effects.
Success of IT Projects
One of the major challenges that organizations face when implementing IT projects is how to define and measure success. Defining and measuring success and failure is problematic because they are understood differently by different people. This problem is exacerbated by the fact that there is no universally accepted definition of either term. Some scholars argue that success is realized if the stakeholders of an IT project perceive it to be successful, while others associate success with the survival of the IT project. The lack of a clear definition of success and failure presents significant challenges to IT project management because projects are often initiated without a clear declaration of what will count as success.
Several scholars have provided different definitions of IT project success and criteria for measuring it. Cooke-Davies asserted that project success is different from project management success. The latter is measured by three variables, namely time, cost, and quality, whereas the former is measured in terms of the extent to which the overall objectives of the IT project were achieved. The criteria provided by DeLone and McLean for measuring success are based on six variables: system quality, service quality, information quality, net benefits, use of the IT system, and end-user satisfaction. These variables often provide an inadequate measurement of success.
For instance, user satisfaction is considered an inappropriate measure because it lacks theoretical underpinning. In addition, frequent use of a system is often not an indicator of success in the context of IT projects such as data warehousing.
Three practices enhance the success of IT projects: having a formal definition of success, measuring success consistently, and using the results of the measurement to improve performance. Empirical studies show that no single method of measuring success is better than the others. Companies that effectively define and measure success use a variety of success indicators, including timeliness, cost-effectiveness, business continuity, ease of use, and satisfaction.
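A hedged sketch of what a formal definition and consistent measurement might look like in practice follows: a weighted scorecard over the indicators named above. The weights, scores, and improvement threshold are illustrative assumptions, not an instrument drawn from the reviewed studies.

```python
# Weighted success scorecard: {indicator: (weight, measured score in [0, 1])}.
success_criteria = {
    "timeliness":          (0.25, 0.80),
    "cost_effectiveness":  (0.25, 0.60),
    "business_continuity": (0.20, 0.90),
    "ease_of_use":         (0.15, 0.70),
    "user_satisfaction":   (0.15, 0.75),
}

overall = sum(w * s for w, s in success_criteria.values())
print(f"overall success score: {overall:.2f}")

# Use the measurement to improve: flag the weakest indicators.
for name, (w, s) in sorted(success_criteria.items(), key=lambda kv: kv[1][1]):
    if s < 0.75:
        print(f"improvement target: {name} (score {s:.2f})")
```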
Moreover, distinguishing between project management success and business success facilitates effective definition and measurement of success in the context of IT projects. Companies that succeed in achieving the expected project outcomes usually define success prior to the implementation of their projects. The rationale of this strategy is that defining success at the beginning provides a clear vision of what has to be achieved at the end of the project.
In sum, three factors determine the success of IT projects: an accepted definition of success, continuous measurement of progress, and use of the measurement results to drive improvement. Thus, companies can improve the outcomes of their IT projects if they know what they need, monitor their progress, and make changes when necessary.
Conclusion
The science of complex systems is associated with numerous challenges, including difficulties in data collection and modeling. Qualitative and quantitative models both have strengths and weaknesses; thus, they can be used together to improve the findings of studies of complex systems. ICT plays an integral role in the study of complex systems by facilitating modeling and analysis. Appropriate measures should therefore be taken to prevent ICT system failures and to improve the success of IT projects.