An actuary’s guide: avoid the pitfalls of data-driven pricing
Insurers are coming under increasing pressure to develop new ways of pricing. Fortunately the availability of high volumes of data can provide a solution, but there are still risks. Here are the pitfalls of shifting to a data-driven pricing model, and how to avoid them.
Pressure on insurers to adapt their pricing models is building thanks to a number of factors: financial tension, regulatory interest and 2018’s super-complaint from Citizens Advice about the loyalty penalty. Not least, consumers are calling for a different system too.
Paul Ridge, head of insurance at SAS UK & Ireland, says 12-month policies are rapidly becoming outdated. “The new demographics, millennials and Generation Z, are less likely to own a car or a home, so they want a different insurance proposition,” he explains. “Insurers have to consider how they should adapt to deliver the products their customers want and remain competitive in this market.”
Advances in technology and the amount of readily-available data give insurers a means to respond to these pressures. As well as tapping into other data sources, the Internet of Things will provide a constant stream of data that could be used to enhance the way insurers operate.
While much of this data is irrelevant, insurers are already able to use more information to gain insight into risk. For example, rather than ask a property owner how far their building is from a river, this information can be gleaned from a satellite image of the area. Sounds great, doesn’t it? But data-driven pricing is not without its drawbacks.
Here are the main challenges, and how to avoid them:
1. Data use can be inappropriate
Insurers, beware — the inappropriate use of data can cause serious reputational issues. There’s a thin line between using data to benefit the customer and being seen as snooping.
Recently an insurer was forced to stop using open data from Facebook to price its business. Consumers rarely consider how open this data is but, while they’re happy for insurers to use it to catch fraudsters, they still deem it private information.
The introduction of the General Data Protection Regulation (GDPR), meanwhile, has added to this reputational risk. The regulation has made customers savvier about how their data is being used.
Insurers must, therefore, have robust processes in place around how they manage their existing and potential customers’ data. This will force them to be more transparent about what data they hold and what it is used for.
Using open source technologies, for example, can make these processes harder to justify. An analytics platform like SAS that is tried and tested by insurers, however, is a far safer way to document these data processes: its built-in governance tools mean an audit can focus on the process used to create models, rather than trawling through lines of code.
2. Data can generate bias
There is a danger that an external data set could unintentionally push an insurer to make biased assumptions.
Data such as consumer occupation can cause complexities. For example, stating one’s occupation as ‘housewife’ is a proxy for gender, a bias that is not permitted in pricing. Whilst this is an obvious example, other occupational correlations (engineer, nurse, mechanic) may be more subtle.
To avoid this, insurers should ensure all the data they use is validated. They should also put in place robust governance and scenario testing. Working with a third-party data expert can ensure your use of data is wholly appropriate.
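To make the proxy problem concrete, the association between a candidate rating factor and a protected attribute can be measured before the factor ever enters a model. The sketch below uses Cramér’s V on hypothetical quote records; the data, field names and cut-off are illustrative assumptions, not any insurer’s actual process:

```python
from collections import Counter
from math import sqrt

def cramers_v(pairs):
    """Cramér's V association between two categorical variables, given a
    list of (a, b) observations. 0 = independent, 1 = perfect proxy."""
    n = len(pairs)
    joint = Counter(pairs)
    a_marg = Counter(a for a, _ in pairs)
    b_marg = Counter(b for _, b in pairs)
    chi2 = 0.0
    for a in a_marg:                      # sum chi-square over every cell,
        for b in b_marg:                  # including unobserved combinations
            observed = joint.get((a, b), 0)
            expected = a_marg[a] * b_marg[b] / n
            chi2 += (observed - expected) ** 2 / expected
    k = min(len(a_marg), len(b_marg)) - 1
    return sqrt(chi2 / (n * k)) if k > 0 else 0.0

# Hypothetical quote records: (occupation, gender)
records = (
    [("housewife", "F")] * 40
    + [("engineer", "M")] * 35
    + [("engineer", "F")] * 5
    + [("nurse", "F")] * 15
    + [("nurse", "M")] * 5
)
score = cramers_v(records)
print(f"occupation/gender association: {score:.2f}")
# a high score means occupation is acting as a gender proxy
```

Any factor scoring above a governance-agreed threshold would then be flagged for actuarial review rather than fed straight into pricing.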
3. Data isn’t always clean
Even perfect data decays at an incredible rate, not to mention the errors obtained through erroneous data entry. Unfortunately, as insurers haven’t always been data-driven, their processes don’t guarantee clean data from the outset. As an example, Kate Wells, managing director of Azur Underwriting, points to insurers’ data on properties. “The date of construction wasn’t always a rating factor so insurers sometimes went for a blanket date such as 1900. This will need to be cleaned up.”
To guard against dirty data, insurers must complete regular and thorough audits of their internal data. Third-party technical consultancies like Demarq work with actuaries and insurance businesses to ensure their data management processes are up to date and reliable.
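A simple screen for blanket values illustrates the kind of check such an audit might include. The sketch below flags any construction year that accounts for an implausibly large share of a portfolio; the records and the 30% threshold are hypothetical, chosen only for illustration:

```python
from collections import Counter

# Hypothetical property records; a blanket year like 1900 often signals
# a placeholder rather than a real date of construction.
properties = [
    {"policy_id": "P001", "construction_year": 1900},
    {"policy_id": "P002", "construction_year": 1900},
    {"policy_id": "P003", "construction_year": 1987},
    {"policy_id": "P004", "construction_year": 1900},
    {"policy_id": "P005", "construction_year": 2004},
]

def flag_blanket_years(records, threshold=0.3):
    """Return construction years that appear suspiciously often, so they
    can be re-sourced rather than used as a rating factor."""
    years = Counter(r["construction_year"] for r in records if r["construction_year"])
    n = sum(years.values())
    return {year for year, count in years.items() if count / n >= threshold}

suspect = flag_blanket_years(properties)
print(suspect)  # → {1900}
```

Flagged years would then be cross-checked against an external source, such as land registry or satellite-derived property data, before repricing.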
4. Information can be disjointed
As well as open data and information already held by insurers, it’s possible to buy proprietary data sets. This can lead to multiple data streams flowing in from countless sources.
Even when this data is clean, insurers can face challenges gaining tangible insights from data when it’s disjointed. Historically, systems were built up around financial reporting requirements, leaving insurers with data silos for claims, underwriting and so on. Bringing this disparate data together can be hard, especially integrating it with unstructured data.
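As a minimal illustration of the silo problem, the sketch below joins a claims extract onto an underwriting extract by policy number, surfacing orphan claims instead of dropping them silently, which is a common failure when integrating siloed systems. The records and field names are invented for illustration:

```python
# Hypothetical siloed extracts, keyed on policy number.
underwriting = {
    "P001": {"product": "home", "sum_insured": 250_000},
    "P002": {"product": "motor", "sum_insured": 18_000},
}
claims = [
    {"policy": "P001", "amount": 4_200},
    {"policy": "P001", "amount": 900},
    {"policy": "P003", "amount": 150},  # no matching underwriting record
]

def join_views(underwriting, claims):
    """Join claims onto underwriting records; orphan claims are returned
    separately so data-quality issues are visible, not hidden."""
    merged, orphans = [], []
    for claim in claims:
        uw = underwriting.get(claim["policy"])
        if uw is None:
            orphans.append(claim)
        else:
            merged.append({**uw, **claim})
    return merged, orphans

merged, orphans = join_views(underwriting, claims)
print(len(merged), "joined,", len(orphans), "orphaned")  # → 2 joined, 1 orphaned
```

At portfolio scale this reconciliation is what a unified analytics platform automates, but the principle of never silently discarding unmatched records is the same.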
Having the right platforms, therefore, will be essential to consolidate and successfully analyse this data. SAS’ insurance analytics software, for example, can unify data streams to vastly improve their value.
5. Algorithms can deteriorate over time
Oversight must go beyond setting up the algorithm, too. Algorithms are only accurate at a point in time and can degrade as conditions change or when they are presented with a different population. Insurers need to monitor any model they deploy and ensure that it remains performant; and if machine learning techniques are used to optimise the model over time, this must be transparent to regulators.
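One common way to monitor for this kind of degradation is the population stability index (PSI), which compares the distribution of model scores at build time with the distribution seen in production. A minimal sketch, using hypothetical decile mixes and the widely quoted rule-of-thumb thresholds (below 0.1 stable, above 0.25 investigate):

```python
from math import log

def psi(expected, actual, eps=1e-6):
    """Population Stability Index between two binned distributions
    (lists of proportions summing to 1). Larger = more drift."""
    return sum(
        (a - e) * log((a + eps) / (e + eps))  # eps guards against empty bins
        for e, a in zip(expected, actual)
    )

# Hypothetical decile mix of model scores at build time vs. in production.
at_build = [0.10] * 10
in_prod = [0.06, 0.07, 0.08, 0.09, 0.10, 0.10, 0.11, 0.12, 0.13, 0.14]

drift = psi(at_build, in_prod)
print(f"PSI = {drift:.3f}")  # → around 0.06: a modest shift, worth watching
```

A scheduled job computing this against each live model would give early warning that the scored population no longer resembles the one the model was built on.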
As well as ensuring they stay within the FCA rules, this approach is also important for keeping the Information Commissioner’s Office on side. Article 22 of the GDPR gives specific guidance on automated decision-making and profiling. The insurer can’t simply abdicate responsibility: it retains accountability throughout and needs to understand what’s happening with its technology.
The bottom line: always ensure you have some level of human intervention. To make this easier, consider enlisting a consultancy like Demarq, which works with insurers to keep their data-driven processes functioning accurately.
6. Data-driven pricing needs expertise
Today, in a new world of data, pricing teams are evolving. Pricing used to be the domain of the actuary, but actuaries have now been joined by data scientists and data engineers.
Whilst the relationship between these disciplines was initially uneasy, it has become friendlier in the last couple of years. But even though these skills have been accepted within pricing, insurers still face challenges attracting the right talent. Data scientists and data engineers are in great demand, and insurers will find themselves competing against other, more innovative sectors for them. Working with a third-party technical consultancy, meanwhile, is cheaper than hiring data experts in-house to write sophisticated AI programmes, and less risky than competing for new hires.
If you do plan to hire data experts, also consider ways of creating the right business culture. By adopting a forward-thinking approach to AI with help from an external consultancy, you’ll attract the best data experts in the future. Modernising your brand will also make existing employees feel part of the mix and boost company morale.
Our advice? Don’t start out alone. Seek the help of a trusted partner like Demarq to implement a data platform like SAS that’s tailored to insurers. Including this as part of your digital transformation will go a long way towards adapting your business and modernising your technologies for the future.