I believe driving with/without knuckleheads is a risk factor.
Or move the deer crossing signs so the deer aren’t crossing busy roads.
ETA: dammit, ninja’d by @1695814 , I knew I should have read the whole thread before posting.
Off the top of my head, I'm not sure of any state other than Wyoming that doesn't require at least use-and-file approval of rates.
Bump, for The Autopian's take (they seem to be better at this than other news-gathering sites):
I think I’ll ask the car company to drop the price of the new car by $X thousands in exchange for my signing away for this. I’ll need the dough for the additional premiums for a car that I’ll never have an accident in (cuz good driver). Then, if Progressive wants me to let them monitor me, that will cost them $X hundreds of dollars up front for the privilege.
Would be curious to hear our P&C folks’ rebuttal to this. Is hard braking ACTUALLY correlated to higher claims or is this just a favorite theory?
I haven’t been directly involved with building the telematics-based pricing algorithms at my employer, but when the folks in our skunk works have described their wizardry, it stuck in my head as follows:
- Collect a bunch of parameters over a few billion kilometers of driving. Parameters are both traditional and those that are collectible with our telematics tool.
- Fit a model to predict loss costs.
- Build a pricing model around that.
- Implement and hopefully generate profit.
- Lather, rinse, repeat.
I haven’t seen any univariate analysis comparing braking to loss frequency, or any PCA outputs from the skunk works. They’re obviously very protective of such information, and I suspect anyone here who has actually been in the weeds feels uncomfortable sharing their experience because of trade secret concerns.
I can make an educated guess that one or more variables built from “hard braking” are predictive. It might not be a simple relationship, however.
I could easily imagine using something like “([(braking events with force between x and y)+2*(braking events with force in excess of y)+(hard acceleration events)]/(distance driven))^2” in a telematics pricing algorithm.
But that’s pure speculation on my part. I haven’t seen an actual algorithm…but that’s the sort of construction I played with back when I was doing credit scoring.
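That speculative construction, written out as code (again, purely hypothetical — the thresholds, weights, and exponent are the post's guesswork, not anyone's actual algorithm):

```python
def telematics_score(moderate_brakes, severe_brakes, hard_accels, km_driven):
    """Hypothetical composite: ([moderate + 2*severe + hard accels] / distance)^2.

    moderate_brakes: braking events with force between x and y
    severe_brakes:   braking events with force in excess of y
    """
    events = moderate_brakes + 2 * severe_brakes + hard_accels
    return (events / km_driven) ** 2
```

For example, 10 moderate brakes, 5 severe brakes, and 5 hard accelerations over 100 km gives `(25 / 100) ** 2 = 0.0625`. The squaring makes the score penalize dense clusters of events much more than the same events spread over more driving.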
The Autopian article does have a point – the data IS crap.
However, most of the data we work with in the hyper-detailed pricing models in use for personal auto in the US and elsewhere is similarly crappy.
Yet we still find predictive signal in the midst of all the crap, because in good times at least, personal auto is SO competitive that carriers are eager to eke out even tiny bits of lift in the models as we try to identify customers whom we think are slightly less risky than our competitors perceive.
When I was doing credit scoring back in the dark ages / a few lives ago, the modelers from the credit card world thought those of us in the insurance world were insane with how enthused we were with the modest lift we saw with individual variables, stuff they would consider almost too marginal to use.
Perhaps we should recognize the crappiness and walk away from such information.
However, if we limit ourselves to non-crappy variables, our rating/underwriting would be a LOT less segmented, and many people would be paying more for their insurance.
And, just because of the nature of the beast…some insurers will seek ways to use the crappy data anyway.
Yeah, I’m pretty sure we do that in health too, but I’m in worksite and we just don’t have that many variables to work with. P&C is a whole ‘nother world in that regard.
Non-actuaries may be shocked to learn how many Model Year 1 vehicles there are — that is, vehicles that are 2,023 years old. Same with roofs and housing electrical.
Though once I went digging on a 200-year-old roof that was clearly wrong and it turned out we insured 1 historical home with a thatch roof that was 200 years old.
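A simple age-sanity check is a common first pass on this kind of field (the threshold below is an arbitrary placeholder, not any carrier's actual rule):

```python
def suspect_age(model_year, current_year, max_age=75):
    """Flag a vehicle/roof age that is almost certainly placeholder data.

    Model Year 1 in 2024 implies an age of 2,023 years -- flagged.
    Negative ages (future model years beyond next year's models) are
    also flagged.
    """
    age = current_year - model_year
    return age > max_age or age < -1


print(suspect_age(1, 2024))     # the 2,023-year-old "vehicle"
print(suspect_age(2015, 2024))  # a plausible 9-year-old vehicle
```

The point of the thatch-roof anecdote applies here: a flag like this should route the record to a human for review, not auto-correct it, because once in a while the 200-year-old roof is real.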
I’m late to the party and didn’t read every post, but have a big enough ego to add my anecdotal data:
I have Progressive Snapshot, and it gives me an A rating which pretty much right there tells you how bad the rating system is. I have somewhere around 0.7 - 0.75 hard brakes a week, so those aren’t weighted crazily heavily. My favorable rating comes from the fact that I don’t use my phone while driving, don’t have fast accelerations (I drive a minivan so can’t), don’t drive when it’s past my bedtime, and don’t drive a ton of miles period.
I don’t have Progressive and am not interested in switching, but this level of reporting detail sounds interesting to me.
Not only has OnStar stopped sharing data with insurance data warehouses…
…they’re discontinuing the service that collected the data in the first place. It seems that even though it was an optional service, more than a few people were signed up without realizing it.
I’m not surprised they axed it. I am surprised how quickly they axed it. Legal team is likely unhappy.
I suspect there are some attorneys salivating over the class-action potential.
I saw that one coming a mile away.
I can’t fathom how a policy like this one got through the legal dept in the first place.
Discovery is going to be interesting when the class actions happen. I suspect legal got overruled by a few executives looking to boost revenues.
Remember: At least some of your smartphone apps spy on you.
Question for the PL actuaries here: why doesn't the use of these scores violate the "socially acceptable" principle for selecting rating variables? And if the utility of the variables outweighs any social-acceptance considerations, then why have the principle at all?
I’m not currently a PL actuary (although I did work on credit scoring in a past life). I suspect that professionalism concerns would limit the ability of folks close to the issue to comment in an informed manner.
To the extent that the data has predictive value in determining the cost of transferring risk and is applied in an equitable way as regards such predictions, and to the extent that the use of such data passes legislative and regulatory muster, I think its use does not appear to violate ASOP 12 or the Ratemaking Statement of Principles (to the extent it still applies).
The data is publicly available and its potential use is (or should be) covered in the privacy agreements and disclosures that are read by almost no one.
The fact that the data is publicly available, even if consent was naïvely granted for it to be such, is worth discussing, especially with regulators and legislators…but I’m not optimistic about that working out, at least in the US.
Then you also have the challenge of objectively defining what data should or should not be used, with an eye towards how things might evolve in the future.
Official public records, MVR, and CLUE are all generally considered acceptable.
Credit data is accepted in many states (or has been mostly addressed by regulation).
Insurer data-sharing populates CLUE; it has been extended to include data about whether customers have had lapses in coverage, and is now being extended to include sharing of data collected via telematics programs.
If a person were to share their drag-racing exploits on social media, and that information were linked to an application for coverage by an underwriter…would that be kosher to use?
Assuming consent were given (through those never-read click-thru notices), why would data aggregated through data-brokers not be as acceptable? Why would it be unacceptable for insurance use, but not for other uses?
Of course, for more fun…and to look ahead to what might be further down the slippery slope… what if the industry got together, set up a network of license plate readers, enhanced with AI processing to identify risky behavior, and fed the data into an industry database. Assuming data quality issues could be resolved… would that data be kosher to use?
Thank you for your thought-out response.