Two major approaches to risk assessment (RA) exist: qualitative and quantitative (Kliem & Richie, 2015). The qualitative method involves evaluating risks and their interactions and assessing their probability and impact. One of its major goals and benefits is the ability to prioritize risks. The quantitative approach involves weighing the value of risks and their interactions and assessing outcomes expressed in “financial terminology” (Kliem & Richie, 2015, p. 82). This enables managers to evaluate possible contingencies for the budget or time frames. Quantitative analysis is often more detailed, and it is more appropriate when an estimate of a risk’s overall effect on the project is necessary.
The organization should undertake a number of steps to ensure RA is appropriate. The first step is to identify threats and hazards to critical operations (Engelmann & Henderson, 2011). Managers should focus on the most relevant risks. After that, it is necessary to determine and evaluate controls, which are procedures that reduce the probability of a risk event progressing through its entire event chain and thereby mitigate its impact. The next step is to evaluate event probabilities and the potential impact. It is also essential to derive a risk measure by combining impact and probability. The final stage is prioritizing risks.
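The final steps described above — combining probability and impact into a risk measure and then prioritizing — can be sketched in a few lines of Python. The risk names, probabilities, and impact figures below are purely hypothetical illustration values, not data from the sources cited.

```python
# A minimal sketch of the last RA steps: combine each risk's probability
# and impact into a risk measure, then prioritize by that measure.
# All risks, probabilities, and impact values are hypothetical.

risks = [
    {"name": "Data center flood", "probability": 0.05, "impact": 500_000},
    {"name": "Key supplier failure", "probability": 0.20, "impact": 150_000},
    {"name": "Ransomware attack", "probability": 0.10, "impact": 400_000},
]

# Risk measure as the product of probability and impact (expected loss).
for risk in risks:
    risk["measure"] = risk["probability"] * risk["impact"]

# Final stage: prioritize risks, highest measure first.
prioritized = sorted(risks, key=lambda r: r["measure"], reverse=True)
for r in prioritized:
    print(f"{r['name']}: {r['measure']:,.0f}")
```

Note that probability × impact is only one common way to build a risk measure; an organization may weight the two factors differently.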
Managers can estimate probability with the help of qualitative or quantitative analysis. In the former, probability is rated as high, medium, or low; this type of analysis is instrumental for prioritizing. Quantitative analysis, by contrast, implies the use of numerical probabilities (Graham & Kaye, 2015). The major sources of data for both types of analysis include questionnaires, interviews, documents, observation, workshops, and internal audits. The sources of data can be internal or external. The former include the staff, auditors, management, contractors, and so on. External sources involve police officials, the weather bureau, fire officials, and so on. Estimating probability enables managers to develop cost-effective strategies that mitigate the aftermath of risks or disrupt the event chain.
Probability statistics is vital for the RA process, as it enables managers to collect, describe, and analyze various data. Statistical techniques help managers estimate risk probability. Statistics may also help estimate the particular costs and time frames needed to mitigate risk impact by disrupting the event chain (Young & Coleman, 2010). The major value of statistics is that it provides a set of particular tools for evaluating risk probability in numerical terms. These data can be expressed in graphical form, which is helpful for prioritizing and evaluating risk impact. Statistical tools make decision-making easier and more effective because it becomes evidence-based.
Bayes’ theorem helps evaluate event probabilities when new data become available. A prior probability can be estimated on the basis of available information, including outside resources such as the experience of other companies. When new data become available, the theorem enables managers to include them in the calculation and estimate the probability more precisely. The more data appear, the more precise the results that can be obtained. At the same time, there is no need to gather all possible information at once, as the event probability can be re-estimated each time new data arrive.
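The updating process described above can be illustrated with a short Python sketch. The prior, likelihood, and false-alarm rate below are hypothetical numbers chosen for illustration; the point is that each new piece of evidence refines the estimate without requiring all data up front.

```python
# Bayesian updating of a risk event's probability as new evidence arrives.
# All probability values are hypothetical illustration figures.

def bayes_update(prior, likelihood, false_alarm_rate):
    """Posterior P(event | evidence) via Bayes' theorem.

    likelihood       = P(evidence | event)
    false_alarm_rate = P(evidence | no event)
    """
    evidence = likelihood * prior + false_alarm_rate * (1 - prior)
    return likelihood * prior / evidence

# Prior estimated from outside resources (e.g. other companies' experience).
p_event = 0.02

# Each new warning indicator updates the estimate; precision grows with data.
for _ in range(3):
    p_event = bayes_update(p_event, likelihood=0.9, false_alarm_rate=0.1)
    print(round(p_event, 4))
```

Each pass through the loop treats the previous posterior as the new prior, which is exactly the "re-estimate as data arrive" workflow the theorem supports.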
The mean value is relevant to BCM, as calculating the mean is important for measuring central tendency. Calculating the mean enables managers to estimate the probability of risk events with a significant degree of precision. The mean is also instrumental in evaluating and prioritizing risks. Notably, the availability of new data is not crucial, as the manager can still evaluate the probability of risk events from existing observations. The mean can also be used when tendencies are estimated: where precise data are important, the mean can be regarded as a standard to follow or a trend to avoid.
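As a small worked example of the mean as a measure of central tendency, the snippet below averages a set of hypothetical monthly downtime figures; the resulting mean can serve as the "standard to follow or trend to avoid" mentioned above.

```python
# Sample mean of historical loss data as a measure of central tendency.
# The downtime figures are hypothetical.
from statistics import mean

monthly_downtime_hours = [2.0, 3.5, 1.0, 4.5, 2.5, 4.0, 3.0, 3.5]
avg = mean(monthly_downtime_hours)
print(f"Mean downtime: {avg:.2f} hours/month")
```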
According to the empirical rule, approximately 68% of items are “within one standard deviation of the mean” (Engelmann & Henderson, 2011, p. 179). Around 95% of items are “within two standard deviations of the mean” while more than 99% are “within three standard deviations of the mean” (Engelmann & Henderson, 2011, p. 179). The empirical rule applies to data that have a bell-shaped distribution. It is applicable in BCM, where it helps managers evaluate and prioritize risks. This is vital for the development of cost-effective strategies aimed at mitigating the aftermath of risks or avoiding them.
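The 68/95/99.7 coverage can be checked empirically on simulated bell-shaped data. The snippet below draws normally distributed values (the mean and standard deviation parameters are arbitrary) and measures what fraction falls within one, two, and three standard deviations.

```python
# Checking the empirical rule on simulated bell-shaped (normal) data:
# roughly 68%, 95%, and 99.7% of values should fall within 1, 2, and 3
# standard deviations of the mean. Parameters (100, 15) are arbitrary.
import random
from statistics import mean, stdev

random.seed(42)
data = [random.gauss(100, 15) for _ in range(10_000)]
m, s = mean(data), stdev(data)

# Fraction of observations within k standard deviations of the mean.
coverage = {
    k: sum(1 for x in data if abs(x - m) <= k * s) / len(data)
    for k in (1, 2, 3)
}
for k, frac in coverage.items():
    print(f"within {k} std dev(s): {frac:.1%}")
```

The simulated fractions land close to the rule's 68%, 95%, and 99.7% because the data are genuinely bell-shaped; on skewed data the rule does not apply.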
To measure multiple and simultaneous risk events, it is possible to apply a number of statistical techniques. The addition rule can be applied to evaluate multiple events; it ensures that risk events and their aftermath are not counted several times. Permutations can be employed when simultaneous risk events occur (Engelmann & Henderson, 2011). Combinations can be utilized to measure multiple risk events.
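A brief sketch of these counting tools follows. The two event probabilities are hypothetical, and independence is assumed when computing the overlap; the addition rule subtracts that overlap so the joint outcome is not double-counted.

```python
# Addition rule for two risk events, plus combinations and permutations
# for counting multiple-event scenarios. Probabilities are hypothetical;
# the two events are assumed independent when computing P(A and B).
from math import comb, perm

p_flood, p_power_cut = 0.10, 0.20
p_both = p_flood * p_power_cut              # independence assumption
p_either = p_flood + p_power_cut - p_both   # addition rule: P(A or B)
print(f"P(flood or power cut) = {p_either:.2f}")  # prints 0.28

# Ways to choose 2 of 5 identified risks occurring together (unordered).
print(comb(5, 2))  # prints 10
# Ordered sequences of 2 events out of 5 (order of occurrence matters).
print(perm(5, 2))  # prints 20
```

`math.comb` and `math.perm` require Python 3.8 or later.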
Reference List
Engelmann, K.J., & Henderson, D.M. (2011). Business continuity and risk management: Essentials of organizational resilience. Brookfield, CT: Rothstein Publishing.
Graham, J., & Kaye, D. (2015). A risk management approach to business continuity: Aligning business continuity and corporate governance. Brookfield, CT: Rothstein Publishing.
Kliem, R.L., & Richie, G.D. (2015). Business continuity planning: A project management approach. Boca Raton, FL: CRC Press.
Young, B., & Coleman, R. (2010). Operational risk assessment: The commercial imperative of a more forensic and transparent approach. Hoboken, NJ: John Wiley & Sons.