• Data Analytics
  • Nov 3, 2021
  • By QOMPLX

Turning the data you have into the data you want

Having data is no great challenge. Most insurance companies have plenty, and while most could use more, data delivers no value unless it is usable. The real challenges are:

  • Collating existing and new data into manageable, comparable datasets where the sources and cleansing methods are traceable
  • Creating mechanisms which allow almost any conceivable data query to be made swiftly and easily to support decision-making

These challenges raise further questions that plague the insurance industry daily. How can the data be delivered easily to multiple users in diverse roles across the organization, or even across the entire insurance value chain? How do you create an infrastructure with tools that yield flexible, customized output at the granularity specific end users need? How do you organize the data so that maintaining and re-formatting it as new requirements emerge does not tie up highly skilled, and therefore valuable, resources?

The answers lie beneath the surface: they depend on the ability to create an underlying data fabric that supports rapid and flexible exploitation of the data.

Weaving the data fabric

A ‘Unified Data Fabric’ is at the heart of any organization’s data resource and infrastructure. It comprises:

  • The company’s own data, such as from underwriting, accounting, and policy administration systems
  • Data created and delivered by market participants such as brokers, insurers, and MGAs
  • Selected third-party data for extended insight capabilities

The Unified Data Fabric starts with ‘ingestion’, a highly automated process that compiles a core pool of data from all relevant available sources. The initial steps must establish a reliable means to automatically acquire data, understand it, and store it for future use.
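
As a rough illustration, here is a minimal Python sketch of that acquire-understand-store loop. The `ingest` function, the in-memory catalog, and the naive schema inference are hypothetical simplifications for this post, not a description of any particular product:

```python
# A minimal sketch of the acquire-understand-store loop, assuming a simple
# in-memory catalog. ingest() and infer_schema() are illustrative names.
import csv
import json
from pathlib import Path

def infer_schema(rows: list[dict]) -> dict[str, str]:
    """'Understand' step: guess each column's type from the first non-empty value."""
    schema: dict[str, str] = {}
    for row in rows:
        for key, value in row.items():
            if key not in schema and value not in (None, ""):
                schema[key] = type(value).__name__
    return schema

def ingest(source: Path, catalog: dict) -> None:
    """Acquire a file, understand its structure, and store it with its lineage."""
    if source.suffix == ".csv":
        with source.open(newline="") as f:
            rows = list(csv.DictReader(f))
    elif source.suffix == ".json":
        rows = json.loads(source.read_text())  # assumes a list of records
    else:
        raise ValueError(f"no connector for {source.suffix}")
    catalog[source.name] = {
        "rows": rows,
        "schema": infer_schema(rows),
        "lineage": {"source": str(source)},  # traceability, per the bullets above
    }
```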

Any data from anywhere

Flexibility is key: the fabric must be able to receive data of almost any form and format, structured and unstructured alike. For the technically minded, that means different database types (e.g. relational, time series, columnar, graph) and different representations and encodings (e.g. flat files, database files, XML, JSON, PDF, Word, CSV). Data must be importable via connectors to common applications, databases, file transfer technologies, and Application Programming Interfaces (APIs). Regardless of the source, data must be brought into the Fabric through a low-code or no-code process, with mapping that defines the source and storage structure. The data is then held in comparable, although not necessarily identical, formats, which allows for relatively straightforward whole-fabric queries.
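
One common low-code pattern is to express the mapping as data rather than code, so onboarding a new source means writing a new mapping, not new logic. A sketch, with invented field names:

```python
# A sketch of a declarative (low-code) field mapping: the mapping itself is
# data, so a new source needs only a new dictionary. Field names are illustrative.

POLICY_MAPPING = {  # source column -> canonical fabric column
    "PolRef": "policy_id",
    "Insd": "insured_name",
    "GWP_USD": "gross_written_premium",
}

def normalize(record: dict, mapping: dict[str, str]) -> dict:
    """Rename source fields into the fabric's comparable, canonical layout."""
    return {canonical: record.get(source) for source, canonical in mapping.items()}

raw = {"PolRef": "P-1001", "Insd": "Acme Marine", "GWP_USD": "125000"}
print(normalize(raw, POLICY_MAPPING))
# {'policy_id': 'P-1001', 'insured_name': 'Acme Marine', 'gross_written_premium': '125000'}
```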

Data virtualization

Where data duplication or the cost of importing and storing data must be avoided, or real-time data performance must be assured, the Unified Data Fabric must allow data to be fully registered as part of the fabric without being moved into it, a capability known as “Data Virtualization”. The data remains at its original location, but still appears and behaves just like any other data in the Unified Data Fabric.
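
In code, the idea is simply that the fabric’s catalog stores a way to reach the data rather than a copy of it. A sketch, in which `VirtualDataset` and its `fetch` callback are hypothetical stand-ins for a real connector:

```python
# A sketch of data virtualization: the catalog holds a handle to the remote
# data, not a copy. VirtualDataset and fetch are hypothetical stand-ins.
from typing import Callable, Iterator

class VirtualDataset:
    """Looks like fabric-resident data, but delegates every read to its source."""

    def __init__(self, name: str, fetch: Callable[[str], Iterator[dict]]):
        self.name = name
        self._fetch = fetch  # nothing is copied; the data stays at its origin

    def query(self, predicate: str) -> Iterator[dict]:
        # The predicate is pushed down to the remote system at query time,
        # so results always reflect the live source.
        return self._fetch(predicate)

# Registration: the fabric stores only the name and how to reach the data.
catalog: dict[str, VirtualDataset] = {}
catalog["claims_live"] = VirtualDataset(
    "claims_live",
    fetch=lambda predicate: iter([]),  # placeholder for a real remote call
)
```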

Democratizing an organization’s data assets

The ingestion step may include initial, limited data transformation and enhancement, for example to understand discrete fields within the source, but the real data reinvention takes place afterwards. Once ingestion is complete, a second step transforms the data, often leveraging analytics.

The easiest way to understand this step is to think of it as a curation process. The curation goal is to create a consolidated corporate data asset that allows efficient information discovery, intelligent connections across datasets, correlation and enrichment with third-party datasets, and the leveraging of Artificial Intelligence (AI). A Unified Data Fabric that incorporates advanced data engineering and is well curated will empower multiple data consumers to run analytics and visualize insights with minimal preparation or restructuring of the data for each use case. We refer to the curation step as DnA: data and analytics. Some have described the successful implementation of a Unified Data Fabric as the democratization of an organization’s data assets.
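
To make the enrichment idea concrete, here is a sketch that correlates an internal policy dataset with an illustrative third-party flood-zone dataset, so downstream consumers query a single curated view rather than re-joining the sources for every use case. All field names and values are invented:

```python
# A sketch of curation: enriching internal records with a third-party dataset
# once, so every consumer sees the joined view. All values are invented.

policies = [
    {"policy_id": "P-1001", "postcode": "FL33101"},
    {"policy_id": "P-1002", "postcode": "TX77002"},
]
flood_zones = {"FL33101": "high", "TX77002": "moderate"}  # third-party lookup

curated = [
    {**p, "flood_zone": flood_zones.get(p["postcode"], "unknown")}
    for p in policies
]
# Downstream analytics and visualization can now use flood_zone directly,
# with no per-use-case restructuring of the data.
```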

A valuable and trustworthy data infrastructure

The ingestion and DnA processes accomplish much more than simple field mapping. The result is a Unified Data Fabric that can be queried to deliver valuable new information, reliably usable for risk management decisions, pricing, modeling, and other critical purposes within a (re)insurance organization. The ultimate value of the Unified Data Fabric is the empowerment of the organization to harvest superior insights, make better decisions faster, and gain a competitive advantage in the market.

The possibilities are limited only by the imagination and the availability of data. An enhanced Unified Data Fabric could be used during the quotation process to calculate a property’s distance to the coast, to assess aspects of an insured’s cyber exposure, or to capture the property characteristics needed to rate a commercial building. It could calculate the propensity of a fleet of vessels to sail into pirate-infested waters. The possibilities are endless if the data can be trusted and the process to make it usable is transparent and repeatable.
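
Taking the first of those examples: distance to coast is a straightforward great-circle calculation once the fabric holds both the property’s coordinates and a coastline dataset. A sketch using the haversine formula, with two invented coastline points standing in for a real dataset:

```python
# A sketch of a quote-time enrichment: great-circle (haversine) distance from
# a property to its nearest coastal point. Coastline points are illustrative.
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two lat/lon points, in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))  # 6371 km: mean Earth radius

def distance_to_coast_km(lat: float, lon: float,
                         coastline: list[tuple[float, float]]) -> float:
    """Minimum distance from the property to any point in the coastline dataset."""
    return min(haversine_km(lat, lon, c_lat, c_lon) for c_lat, c_lon in coastline)

sample_coast = [(25.76, -80.13), (26.12, -80.10)]  # two invented points
print(round(distance_to_coast_km(25.79, -80.29, sample_coast), 1))  # km
```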

As raw data from disparate sources passes through the organization, it ultimately becomes a usable group of organized datasets within the Unified Data Fabric. With a few keystrokes, that in turn becomes actionable business intelligence.

Convergence and a steady move toward open standards, processes, and even open data mean that organizations with a superior ability to correlate data and extract unique value will be at a significant advantage. Competitive advantage is becoming less and less about simply owning data.

In this blog series, “Creating Value Through Insurance Data Infrastructure”, we will discuss how to extract value from data to benefit your business without the extreme cost, rework, and grueling system replacement that all too often accompany such efforts. Data is the catalyst of the future for enhanced profitability and communication across the entire insurance value chain.
