Corporate • Sep 16, 2021 • By QOMPLX

How much automation?

Underwriting: An art and a science

Resistance from the front line has been a key reason the insurance sector has lagged behind others in the drive to adopt advanced technological tools. Underwriters naturally believe that their job involves a degree of art as well as science, and therefore cannot be carried out by an algorithm. This belief is certainly understandable, not least because it’s true in a great many cases. Yet the benefits of automation are attractive both to management and to underwriters. The challenge is to find the correct level of automation to ensure that risk carriers reap the benefits of the science and the art.

The gains achievable through the automation of pricing decisions are clear and tangible. They include cost savings, which can be substantial. A great many underwriting decisions, even in the speciality insurance arena, simply require no artistic input. High-volume, low-severity small commercial risks usually fall into this category. When sufficient data fuels the system, weightings for factors such as weather-related catastrophe exposure can be applied without the underwriter's particular acumen. Increasingly, mid-sized businesses fall into this category, too.

When the pricing of these vanilla risks is removed from underwriters' agendas, their costly time and expertise are released to focus on the outliers, those more difficult cases where artistry and instinct are required alongside data and math. Identifying those cases – separating the wheat from the chaff – is another area where technology can help. In essence, the system has sufficient data to distinguish between the risks it can handle on its own and those it must refer to an underwriter to get the job done well.

Data: A helping hand

Several data-related variables sit alongside risk complexity to define the dividing lines between algorithmic pricing and human underwriting. Data density, quality, and provenance are critical. An old computing maxim promised GIGO: “garbage in, garbage out.” To ensure the quality of automated pricing decisions, data must be sufficiently robust. It must include sufficient comparable cases presented usefully with adequate granularity, and come from trusted, traceable sources. Finally, it must be compatible with pricing engines in formats they understand.
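The criteria above can be sketched as a simple pre-pricing gate. This is an illustrative sketch only: the field names, trusted-source list, and density threshold are hypothetical assumptions, not part of any particular pricing engine.

```python
# Hypothetical pre-pricing data checks. Field names, sources, and the
# comparable-case threshold are illustrative assumptions.

REQUIRED_FIELDS = {"industry_code", "revenue", "postcode", "source"}
TRUSTED_SOURCES = {"broker_portal", "verified_feed"}  # provenance whitelist


def is_priceable(record: dict, comparable_count: int) -> bool:
    """Return True only if the record is robust enough for automated pricing."""
    has_fields = REQUIRED_FIELDS <= record.keys()      # format and granularity
    trusted = record.get("source") in TRUSTED_SOURCES  # traceable provenance
    dense_enough = comparable_count >= 100             # sufficient comparables
    return has_fields and trusted and dense_enough
```

A record failing any one of these checks would fall back to manual underwriting rather than risk a garbage-in, garbage-out price.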

Curated and fine-tuned data is valuable to underwriters in more ways than one. Reliable, accessible data is a powerful tool to inform manual underwriting decisions. It allows them to assess individual risks with all the essential knowledge right to hand. That makes their jobs easier by eliminating routine tasks. Alongside sophisticated processing capabilities, such data has multiple additional value-generative applications. It can be used, for example, to identify product development opportunities based on trends spotted in millions of otherwise-unintelligible data points.

Automated risk pricing is a relatively simple process, and almost identical to the steps an underwriter follows. Simple rules help to determine if a submission can be priced without human intervention. If a specific risk ticks a specified set of boxes, then an automated quote can be issued. If it ticks none, it will probably be rejected automatically.

If only some boxes are ticked, the submission will be referred for underwriting. Since about four out of five of the submissions risk carriers receive elicit an immediate "yes" or "no," the criteria that underwriters use, their own "simple rules," can be automated. That leaves those highly skilled individuals free to work on the remaining 20%, the marginal cases which may be worth pursuing and yield the highest returns. Few underwriters are resistant to that sort of practical, efficient support.
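The all-boxes/no-boxes/some-boxes logic described above can be expressed in a few lines. The thresholds, fields, and rules below are invented for illustration; a real rating engine would draw its rules from actuarial analysis, not hard-coded constants.

```python
# Minimal sketch of rules-based submission triage, as described above.
# Thresholds and field names are hypothetical, not a production rulebook.

from dataclasses import dataclass


@dataclass
class Submission:
    annual_revenue: float   # insured's annual revenue, USD
    loss_ratio_5yr: float   # historical losses / premium over five years
    in_cat_zone: bool       # located in a catastrophe-exposed area


def triage(sub: Submission) -> str:
    """Return 'auto_quote', 'auto_decline', or 'refer' for a submission."""
    # Each rule is one "box" the submission can tick.
    boxes = [
        sub.annual_revenue < 5_000_000,  # small commercial risk
        sub.loss_ratio_5yr < 0.6,        # acceptable loss history
        not sub.in_cat_zone,             # no catastrophe exposure
    ]
    if all(boxes):
        return "auto_quote"    # every box ticked: price without human input
    if not any(boxes):
        return "auto_decline"  # no box ticked: automatic rejection
    return "refer"             # marginal case: send to an underwriter
```

In this sketch the roughly 80% of clear-cut submissions resolve in the first two branches, and only the `refer` cases reach an underwriter's desk.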

Automation: Maximizing benefits

A cohesive central data repository is clearly essential to the automation process. It is also valuable in many other areas of the risk-carrying business, ranging from regulatory reporting to policy administration, exposure management, and claims administration. The trick is to bring all the data together, stack it neatly according to a plan, and make sure it is clean and consumable. Only then can the priceless intelligence trapped in the vast reams of numbers be released to create enormous value. To maximise those benefits, the system deployed must work equally well for all parts of the business, not just underwriting.

In our blog series, "Creating Value Through Insurance Data Infrastructure," we look at how companies strive to extract value from data. Progress is being made as the industry matures and finds ways to manage big data sets and enable the analytics needed to make the right decisions.
