Predictive Analytics - Redlining? (Canada)

Just got off the phone with an insurance company rep who proudly told me that they are using predictive analytics to quick-issue policies. Two factors he highlighted were credit checks and postal code (location). He doubled down by noting that other companies are doing this as well.

So they do a credit and location check, and based on that they determine which underwriting requirements to order. The end result: areas where some races may be predominant, and which I guess may have lower creditworthiness overall, end up getting more medical requirements. More medical requirements result in more ratings and/or more denials, which results in some races being charged more or denied more.

Educate me here. Is this not redlining (where you charge people more based on location, location is correlated with race, and so you end up charging based on race, even if inadvertently)?

I’m actually surprised this is legal in Canada, though I guess it is. Are the companies just open to being accused of redlining? Or maybe there are other methods being used in conjunction with this data to remove any possibility of systemic racism?

Well it can be, but you’d have to look at the racial data and do your own analysis and then make a conclusion based on the evidence.

Also, if there’s hard evidence like company emails and phone calls explicitly saying that’s what they do.

Of course, how you even get that data is a hurdle. If they have an online quotation system, maybe you can mass-apply by changing the postal code, then compare the results with demographic data.

That itself could raise some ethics issues on falsely representing yourself, though.
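
If you did get that far, the comparison step itself is simple enough; here’s a rough sketch in Python, with every file and column name made up for illustration:

```python
# The comparison step of such an audit: join quoted premiums by
# postal code with census demographics and check correlation.
# Both files and all columns are hypothetical.
import pandas as pd

quotes = pd.read_csv("quotes_by_postal_code.csv")  # postal_code, avg_premium
census = pd.read_csv("census_by_postal_code.csv")  # postal_code, pct_minority

merged = quotes.merge(census, on="postal_code")
print(merged["avg_premium"].corr(merged["pct_minority"]))
```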

Agree with CS. They’d still have to show correlation between the locations used and racial data. It could be redlining (I’d say probably), but might not be.

You’re saying that redlining vs not redlining is based on whether or not it’s deliberate?
That’s the difference between overt and systemic racism, and yeah, I’d assume this is not overt. And I’d think (thus the post) that even if it’s not overt, it’s still a problem.

> but you’d have to look at the racial data

I’m not doing either the data or the process; someone else is. But it seems clear (?) to me that as soon as you’re building financial products based on location, you’re going to be treading into racial territory. Chinatown comes to mind… There’s a ton of racial segregation by location, that’s certain. Therefore, financial products based on location as granular as postal code are going to carry some sort of racial overtones.

One question I would ask is how much smoothing is done by location. There may be too much smoothing to allow redlining. That doesn’t mean less technical users don’t imagine it’s using location at a finer grain.
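
To illustrate, a credibility-style blend of each postal area’s experience toward a regional average might look something like this (all file and column names invented):

```python
# Credibility-style smoothing (sketch): blend each postal area's raw
# rate toward its region's rate, weighted by exposure. File and
# column names (region, postal_area, claims, exposure) are hypothetical.
import pandas as pd

df = pd.read_csv("experience.csv")  # hypothetical experience data
K = 1000.0  # credibility constant: more exposure => more local weight

area = df.groupby(["region", "postal_area"], as_index=False)[["claims", "exposure"]].sum()
area["raw_rate"] = area["claims"] / area["exposure"]

region = df.groupby("region", as_index=False)[["claims", "exposure"]].sum()
region["region_rate"] = region["claims"] / region["exposure"]
area = area.merge(region[["region", "region_rate"]], on="region")

# Z in [0, 1): thin postal areas get pulled hard toward the regional
# average, which blunts any fine-grained location effect.
Z = area["exposure"] / (area["exposure"] + K)
area["smoothed_rate"] = Z * area["raw_rate"] + (1 - Z) * area["region_rate"]
print(area[["postal_area", "raw_rate", "smoothed_rate"]])
```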

You do a model comparison, with and without race as a factor.

If location/credit become insignificant when race is added, then they were acting as proxies for race and redlining is present.
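
Roughly, something like this sketch (the dataset and column names are hypothetical):

```python
# With/without-race model comparison (sketch). The file and column
# names (denied, credit_score, postal_area, race) are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

apps = pd.read_csv("applications.csv")  # hypothetical dataset

# Model 1: outcome from credit and location only.
m1 = smf.logit("denied ~ credit_score + C(postal_area)", data=apps).fit()

# Model 2: same predictors plus race.
m2 = smf.logit("denied ~ credit_score + C(postal_area) + C(race)", data=apps).fit()

# If the credit/location coefficients lose significance once race is
# in the model, they were acting largely as proxies for race.
print(m1.summary())
print(m2.summary())
```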

While pricing or underwriting based on location could lead to accusations of redlining, there are lots of insurance products where location is very relevant to level of risk. Take windstorm insurance in a coastal state for example. Policies located directly on the beach absolutely need to be charged more than inland risks for windstorm exposure. It would make perfect sense to have zips in coastal counties get flagged for some additional underwriting criteria. Flood insurance is another good example. Auto and homeowners insurance are also typically territorially rated, although that practice can cause accusations of disparate racial impact at times.

What product were you inquiring about?
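
For illustration, the kind of location trigger I mean could be as simple as this (the ZIP prefixes are invented):

```python
# A location-based underwriting trigger of the kind described:
# flag ZIPs in coastal counties for extra windstorm review.
# These prefixes are invented for illustration only.
COASTAL_ZIP_PREFIXES = {"335", "336", "339"}

def needs_wind_review(zip_code: str) -> bool:
    """Return True if the ZIP falls in a (made-up) coastal prefix."""
    return zip_code[:3] in COASTAL_ZIP_PREFIXES

print(needs_wind_review("33540"))  # True under these example prefixes
print(needs_wind_review("10001"))  # False
```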


Intent has nothing to do with being found guilty of redlining or any discrimination. All you need to do is show disparate impact. It doesn’t take fancy models, just tying race (or gender or religion or any protected class) to the applications and looking at the final outcomes. The use of both credit and postal code would make me very nervous that there will be adverse impact, as both can be used as close proxies for race.
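
For example, a basic screen like the common four-fifths rule is just a rate comparison; a sketch with hypothetical column names:

```python
# Disparate-impact screen: compare approval rates across groups.
# The file and columns (race, approved) are hypothetical.
import pandas as pd

apps = pd.read_csv("applications.csv")  # hypothetical dataset

rates = apps.groupby("race")["approved"].mean()
benchmark = rates.max()  # highest-approval group as the reference

# Common "four-fifths" rule: flag any group approved at under 80%
# of the benchmark group's rate.
impact_ratio = rates / benchmark
print(impact_ratio[impact_ratio < 0.8])
```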

All of the automated underwriting I worked on, including the requests for further requirements, steered away from these two features. We may have varied requirements by state, but that would have been for other legal reasons (like we couldn’t use some sort of third-party report in HI for whatever reason). This was in the US; Canada might be different.


It may be worth noting that this is the life insurance subforum, and I suspect (I’m a P&C guy) that many/most life products “should be” less sensitive to geographic considerations than most P&C products.

(I.e., I think there may be perfectly legal reasons to treat local ZIP codes x and y differently for auto insurance, but I’m not certain how you’d justify that for, say, whole life.)


Good point. On the life side, I can see how any territorial rating/underwriting criteria could be problematic.

I believe that in insurance, disparate impact is not necessarily enough to show illegal discrimination if there is a strong relationship to risk. So for P&C, I think charging more for policies closer to the coast might be OK if it could be shown that those homes are at much higher risk from hurricanes, regardless of whether there is a disparate outcome. Similarly for charging more for flood insurance in particular areas.

I agree that it’s hard to imagine an example like that for life insurance, where the use of fine grained location is clearly justified by risk.

I suppose an analogy might be refusing to underwrite people with certain pre-existing conditions. Would a life insurer have to worry about whether those conditions are correlated with race? I don’t know.