Insurance companies have been among the biggest investors in AI development in recent years. At present, insurers see AI as a convenient way to gather and present information about a customer so that employees can make informed decisions, respond to inquiries more quickly, and render better judgments on claims (Neapolitan, 2018). At the same time, critics point out that an increasingly AI-centric system will lack the human touch and make poor decisions that harm vulnerable groups of people. The purpose of this paper is to evaluate how AI can be used to mitigate risk and whether it benefits or hurts consumers depending on the type of data acquired.
AI and Risk Mitigation
One of the primary uses of AI in insurance is understanding risk. Underwriters need detailed information about a client to assess the risks involved and offer a product at a price satisfactory to both parties. AI can rate a person across more than 250 categories, and the ways those categories interact are difficult for even a human to comprehend (Neapolitan, 2018). Machine learning systems can handle this complexity, saving time and producing more accurate predictions, which benefits insurance businesses. Claims control is another important aspect of risk mitigation. This area of insurance is known for its poor and error-prone decision-making. AI can make the process simpler and less biased by extracting between 50 and 100 data points from a correctly filled-out claim alone and supporting the decision-maker with its own analysis (Kautz & Singla, 2016). AI therefore has great potential for mitigating risk and improving the overall efficiency of claims handling.
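To make the scale of this task concrete, the sketch below trains a simple risk model on synthetic applicant data. It is only an illustration of the kind of technique described above: the 250 category scores, the labels, and the model choice (scikit-learn's gradient boosting) are assumptions of this sketch, not details from the cited sources.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for the 250+ rating categories: each applicant is a vector
# of category scores whose interactions the model must learn.
n_applicants, n_categories = 5000, 250
X = rng.normal(size=(n_applicants, n_categories))

# Synthetic "claim filed" labels driven by a nonlinear interaction of a
# few categories -- the kind of pattern a human reviewer would miss.
risk = X[:, 0] * X[:, 1] + 0.5 * X[:, 2] ** 2 + rng.normal(scale=0.5, size=n_applicants)
y = (risk > np.quantile(risk, 0.8)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Gradient boosting picks up feature interactions automatically.
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# The underwriter sees a single risk probability per applicant,
# distilled from hundreds of interacting inputs.
print("High-risk probability, first 5 applicants:",
      model.predict_proba(X_test)[:5, 1].round(3))

A real underwriting system would, of course, add validation, calibration, and human review on top of such a score.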
AI Used to Benefit or Hurt Clients
One prominent issue with AI and learning algorithms is that they lack the cultural context of the system in which they operate and rely entirely on the data provided to them, without the benefit of outside human experience. For example, an algorithm built on financial data would likely discriminate against minorities (Larrañaga & Moral, 2011). Many Black and Hispanic people do not have a steady or even complete employment history, often working odd jobs or being employed off the books. Their housing situation can be similarly unstable, whether because they lack a permanent residence or because their home is mortgaged or otherwise encumbered.
In these situations, the decisions the AI makes would not improve these people's circumstances or better society as a whole. Instead, because the AI applies the same measure of worth to every client consistently, it would classify them as high-risk and give them worse offers (Neapolitan, 2018). This approach would likely benefit the white population more, as it has accumulated more wealth over its history and generally has more stable credit scores. A person aware of the social situation in their country can make an educated choice and avoid discriminating against minorities (Neapolitan, 2018). A computer cannot do so unless the constraint is hardwired into the system, and doing that would diminish the AI's ability to learn and make decisions.
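One way to surface this effect in practice is a simple group-rate audit of a model's outputs. The sketch below applies the "four-fifths" disparate-impact heuristic from US employment law to hypothetical decisions; the groups, rates, and threshold are illustrative assumptions of this sketch, not a method taken from the cited sources.

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical model outputs: True = favorable offer, False = high-risk pricing.
group = rng.choice(["A", "B"], size=2000, p=[0.7, 0.3])
# Simulate a model trained on proxy financial data that favors group A.
favorable = rng.random(2000) < np.where(group == "A", 0.60, 0.35)

rates = {g: favorable[group == g].mean() for g in ("A", "B")}
ratio = rates["B"] / rates["A"]

print(f"Favorable rate A: {rates['A']:.2f}, B: {rates['B']:.2f}")
# A ratio below 0.8 flags potential disparate impact under the heuristic.
print(f"Disparate-impact ratio: {ratio:.2f}",
      "-> potential disparate impact" if ratio < 0.8 else "-> within heuristic")

Such an audit only detects the disparity; correcting it requires exactly the kind of contextual judgment the paragraph above argues a computer lacks.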
Conclusion
AI can be a useful tool for providing and analyzing data. It can help mitigate risks and enable a quicker decision-making process. At the same time, it is not a perfect system: it carries a potential for discrimination and an overreliance on the quality of its data. Full automation of insurance claims and underwriting is therefore not advisable at this time.
References
Neapolitan, R. E. (2018). Artificial intelligence (2nd ed.). Chapman and Hall.
Larrañaga, P., & Moral, S. (2011). Probabilistic graphical models in artificial intelligence. Applied Soft Computing, 11(2), 1511–1528.
Kautz, H., & Singla, P. (2016). Combining logic and probability. Communications of the ACM, 59(7), 106.