Predictive Analytics - Redlining? (Canada)

Just got off the phone with a rep from an insurance company who proudly told me that they’re using predictive analytics to quick-issue policies. Two factors he highlighted were credit checks and postal code (location). He doubled down by noting that other companies are doing this as well.

So, they do a credit and location check, and based on that they determine what underwriting requirements to order. The end result: areas where some races may be predominant, and which I guess may have lower creditworthiness overall, end up getting more medical requirements. More medical requirements result in more ratings and/or more denials, which results in some races being charged more or denied more often.

Educate me here. Is this not Redlining? (where you charge people more based on location, where location is correlated by race, so you end up charging based on race, even if inadvertently).

I’m actually surprised this is legal in Canada, though I guess it is. Are the companies just open to being accused of redlining? Or maybe there are other methods being used in conjunction with this data to remove any possibility of systemic racism?

Well, it can be, but you’d have to look at the racial data, do your own analysis, and then draw a conclusion based on the evidence.

Also, if there’s hard evidence like company emails and phone calls explicitly saying that’s what they do.

Of course, how you even get that data is a hurdle. If they have an online quotation system, maybe you could mass-apply while varying only the postal code, then compare the results against demographic data.

That itself could raise some ethics issues on falsely representing yourself, though.
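If you did get outcome data by postal code in some legitimate way, the comparison step itself is pretty simple. A rough sketch with made-up file and column names (the demographic file would come from public census data):

```python
# Sketch only: assumes two hypothetical CSVs you've assembled yourself.
#   quotes.csv - one row per test quote: postal_code, outcome ("quick_issue" or "more_requirements")
#   census.csv - one row per postal code: postal_code, visible_minority_pct (public census data)
import pandas as pd

quotes = pd.read_csv("quotes.csv")
census = pd.read_csv("census.csv")

# Share of quotes in each postal code that triggered extra underwriting requirements
by_code = (
    quotes.assign(flagged=quotes["outcome"].eq("more_requirements"))
          .groupby("postal_code", as_index=False)["flagged"]
          .mean()
)

merged = by_code.merge(census, on="postal_code")

# Crude first look: does the flag rate move with the visible-minority share?
print(merged[["flagged", "visible_minority_pct"]].corr())
```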

Agree with CS. They’d still have to show correlation between the locations used and racial data. It could be redlining (I’d say probably), but might not be.

You’re saying that redlining vs not redlining is based on whether or not it’s deliberate?
That’s the difference between overt and systemic racism, and yeah, I’d assume that this is not overt. And I’d think (hence the post) that even if it’s not overt, it’s still a problem.

but you’d have to look at the racial data

I’m not doing either the data or the process; someone else is. But it seems clear (?) to me that as soon as you’re doing financial products based on location, you’re treading into racial territory. Chinatown comes to mind… There’s a ton of racial segregation by location, that’s certain. Therefore financial products based on location as granular as postal code are going to carry some sort of racial overtones.

One question I would ask is how much smoothing is done by location. There may be too much smoothing for redlining to show up. That doesn’t mean less technical users don’t imagine it’s using location at a finer grain.
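To illustrate what I mean by smoothing, here is a purely made-up sketch (hypothetical file/column names, arbitrary credibility constant): each postal code’s raw rate gets blended toward its region’s rate, so thin postal codes mostly inherit the regional level.

```python
# Sketch of credibility-style smoothing by location; everything here is hypothetical.
import pandas as pd

claims = pd.read_csv("claims.csv")  # columns: region, postal_code, exposure, loss

K = 500.0  # credibility constant: more exposure -> more weight on the local rate

by_code = claims.groupby(["region", "postal_code"], as_index=False)[["exposure", "loss"]].sum()
by_code["local_rate"] = by_code["loss"] / by_code["exposure"]

by_region = claims.groupby("region", as_index=False)[["exposure", "loss"]].sum()
by_region["region_rate"] = by_region["loss"] / by_region["exposure"]

merged = by_code.merge(by_region[["region", "region_rate"]], on="region")
merged["z"] = merged["exposure"] / (merged["exposure"] + K)  # credibility weight
merged["smoothed_rate"] = merged["z"] * merged["local_rate"] + (1 - merged["z"]) * merged["region_rate"]
```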

You do a model comparison, with and without race as a factor.

If location/credit becomes insignificant when race is added, then redlining is present.
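A sketch of what that comparison could look like in practice, with hypothetical column names and a census-based minority-share proxy standing in for individual race (which insurers generally don’t collect):

```python
# Sketch of the with/without-race model comparison; all names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# denied (0/1), credit_score, postal_risk_score, minority_pct (census-based proxy)
df = pd.read_csv("applications.csv")

base = smf.logit("denied ~ credit_score + postal_risk_score", data=df).fit()
with_race = smf.logit("denied ~ credit_score + postal_risk_score + minority_pct", data=df).fit()

print(base.summary())
print(with_race.summary())
# If credit_score / postal_risk_score lose significance once minority_pct enters,
# that suggests they were acting largely as proxies for race.
```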

While pricing or underwriting based on location could lead to accusations of redlining, there are lots of insurance products where location is very relevant to level of risk. Take windstorm insurance in a coastal state for example. Policies located directly on the beach absolutely need to be charged more than inland risks for windstorm exposure. It would make perfect sense to have zips in coastal counties get flagged for some additional underwriting criteria. Flood insurance is another good example. Auto and homeowners insurance are also typically territorially rated, although that practice can cause accusations of disparate racial impact at times.

What product were you inquiring about?


Intent has nothing to do with being found guilty of redlining or any discrimination. All you need to do is show disparate impact. It doesn’t take fancy models, just tying race (or gender or religion or any protected class) to the applications and looking at the final outcomes. The use of both income and postal code would make me very nervous that there will be adverse impact as both can be used as close proxies for race.
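The “no fancy models” version might look something like this, assuming you could tie a race field (self-reported or imputed) to each application; column names are made up:

```python
# Sketch of a simple adverse/disparate impact check; all names are hypothetical.
import pandas as pd

apps = pd.read_csv("applications.csv")  # columns: race, approved (0/1)

approval = apps.groupby("race")["approved"].mean()
reference = approval.max()               # best-treated group

impact_ratio = approval / reference      # analogous to the EEOC "four-fifths" test
print(impact_ratio.sort_values())        # groups well below ~0.8 signal adverse impact
```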

All of the automated underwriting I worked on, including the request for further requirements, steered away from these two features. We may have varied requirements by state, but that would have been for other legal reasons (like we couldn’t use some sort of third-party report in HI or something, for whatever reason). This is for the US; Canada might be different.


It may be worth noting that this is the life insurance subforum, and I suspect (I’m a P&C guy) that many/most life products “should be” less sensitive to geographic considerations than most P&C products.

(I.e., I think there may be perfectly legal reasons to treat local ZIP codes x and y differently for auto insurance, but I’m not certain how you’d justify that for, say, whole life.)


Good point. On the life side, I can see how any territorial rating/underwriting criteria could be problematic.

I believe that in insurance, disparate impact is not necessarily enough to show illegal discrimination if there is a strong relationship to risk. So for P&C, I think charging more for policies closer to the coast might be OK if it could be shown that those homes are at much higher risk from hurricanes, regardless of whether there is a disparate outcome. Similarly for charging more for flood insurance in particular areas.

I agree that it’s hard to imagine an example like that for life insurance, where the use of fine grained location is clearly justified by risk.

I suppose an analogy might be refusing to underwrite people with certain pre-existing conditions. Would a life insurer have to worry about whether those conditions are correlated with race? I don’t know.

This topic is still on my mind, and more so today because I just ran into a PhD in ActSci on a Discord server (an outdoors one, curiously) who’s doing a paper on discrimination in insurance.
In terms of the example you gave, I don’t get how using location data as granular as postal code is anything other than redlining. You’re talking, as I mentioned before, stuff like Chinatown, or reservations, at the postal code level. I’m still suspicious that since the industry doesn’t seem to be looking at it (at least nobody here is), it’s likely present even if inadvertent.
A very good example, I suspect, is Brampton/Mississauga in Ontario. They have a very heavy population of South Asian immigrants - the rednecks around here call Brampton Brangladesh. And I know from my friends on the retail side in P&C that they’ll overtly try to dissuade anyone from those postal codes from placing car insurance through their agencies. Apparently there’s a high incidence of fraud (so I’m told). But is there redlining going on? We don’t know unless someone runs the numbers and proves that it’s not.

I may, once I finish my current masters, head to the local university and see if I can convince an actsci prof to let me do a masters on the subject.

Well, let’s consider national flood insurance, or perhaps some kind of flood insurance offered by a private carrier.

Flood risk can vary in a very fine grained way by location. In principle, if you are at the top of a hill, then you may have very little flood risk, while your near neighbor at the bottom of the hill has a lot of flood risk.

That might lead to territories that are as fine grained as postal code, or even more fine grained. And I believe this is the case for the zones used by national flood insurance.

Here you have a clear causal relationship between the insured peril, and location. Being at a lower elevation closer to water can cause you to have a flood loss.

With auto insurance, per your example, I think it becomes morally more ambiguous. I don’t know enough about the law to know how it compares legally.

Unless there’s some race that likes to live on top of hills! Like something out of Dr. Seuss.

I’ve been told stuff like this actually happens, though. I was driving down the 401 highway where they were building walls of townhouses right against the highway. My reaction was, who would buy a house there? And I was told… Arabs. I was told that Arabs like their homes in high-visibility locations where people can see them. So if that’s true, we’ve got postal codes with a clustering by race that isn’t intuitive to me, and maybe locations near highways change house or car insurance rates.

It goes back to the same thing. Redlining likely isn’t overt, but if it’s not tested for to prove it’s not there, then there’s a real possibility that it’s happening. Even if living near a highway or on top of a hill means there exist claims that justify some sort of pricing difference.

Or deaf people, who can’t hear the highway noises.
We have plenty of these going up in SoCal, with awesome balconies overlooking the freeway. I’ve lived in a place with a balcony too close to a freeway (wasn’t even that close, but it was busy 24/7/52/365). Effing waste. Also, for SoCal, if you live within a mile of a busy (and often slow-trafficked) freeway, you’re more likely to develop asthma (more in children who live there during developing years).
So, naturally, who won’t live there? People with money and thus choices.

This is insane. I’m in an outdoors Discord for camping at uwaterloo, and an ActSci PhD posts there… we got chatting, and they’re doing their studies on exactly this area. And their supervisor is someone I’ve heard of (and communicated with on the old AO). They indicated there have been about 10 decent papers in this area; it’s emerging.
What a wild ride bumping into this person. I’m stoked to get some actual reading material and find people working on this.

I think we have four possible situations.

The first continues with flood risk, where race is completely unrelated to the peril. It may be that minorities are living in the areas with higher flood risk due to other effects of systemic racism, creating disparate impact.

A second situation is one in which the hazard itself is potentially created by the systemic racism. So we could imagine a neighborhood having higher credit risk, or crime risk. In this case, these hazards are at least partially caused by the neighborhood being lower income. The insurer may not intend to charge minorities more; it is charging them more because their costs are higher. There is disparate impact, but it reflects risk-based pricing.

A third situation is one in which an insurer charges minority neighborhoods more even though the costs are not higher, as, I believe, Allstate has been accused of doing in some states for reasons related to price elasticity. Impact by race is not the intent, but there is disparate impact that is not risk-based.

Finally, there is the situation in which a company charges minority neighborhoods more because they know that minorities live there.

All of these situations are problematic in the sense that they involve systemic racism. I think the interesting question from a public policy position is the role the insurer should play. I think that, in the US, the decision has been made that disparate impact may be OK as long as the insurer is charging a risk-based price. I assume the idea is that keeping an insurer from charging a risk-based price can cause a lot of market problems, and there are other ways to address systemic racism. The last two scenarios above, in which there is disparate impact that is not risk-based, are illegal, as I understand it. However, there is some controversy about whether insurers should be allowed disparate impact as long as it is risk-based. Does this mean the first two scenarios are not redlining? I don’t know.
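One way to see which bucket a given book falls into (a sketch only, with hypothetical column names) is to control for the insurer’s own risk measure and then see whether a race proxy still predicts the premium:

```python
# Sketch of separating risk-based from non-risk-based disparate impact; names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

policies = pd.read_csv("policies.csv")  # premium, expected_loss, minority_pct

model = smf.ols("premium ~ expected_loss + minority_pct", data=policies).fit()
print(model.summary())
# A significant minority_pct coefficient after controlling for expected_loss points to
# the third or fourth scenario (non-risk-based impact); if it vanishes, the impact is
# at least consistent with risk-based pricing (the first two scenarios).
```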
