Let's talk loss trending

It seems like actuarial literature overwhelmingly chooses one trend for the past and applies it to all years, and the same for projecting loss trend forward. Even with “one trend for the past, one trend for the future” there’s exactly one trend rate chosen for each, and it doesn’t vary by year. For most situations, this is probably OK. For the current economic environment, it arguably isn’t.

Is there literature out there that discusses choosing different trend rates by year, prior/future? I'm asking because, with inflation by whatever measure you prefer running hotter than in prior years, at some point this year will become a prior year in ratemaking, and choosing one and only one loss trend would seem to create problems. The same goes for projecting forward, if you think the current inflation rate is "transitory" and will slow before the effective date you're choosing for changes, or if you're doing planning and trying to project out a few years.

Use an indexing approach to bring different years forward, i.e. prior trend factor = current year index / prior year index for each of the different years. The projected trend from current should still be the same regardless, though, in a 'two-step' approach. I can't cite any literature directly, but I have seen this in filings in practice.
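A minimal sketch of what that indexing approach could look like. The index values, trend selection, and trend period below are illustrative assumptions, not real data:

```python
# Hedged sketch of the indexing approach: each historical year is brought to
# current level via an index ratio, then one selected trend projects forward.
# All numbers here are made up for illustration.
index = {2019: 1.00, 2020: 1.02, 2021: 1.05, 2022: 1.14, 2023: 1.22}
current_year = 2023
prospective_trend = 0.04     # one selected trend from current to effective
years_to_effective = 1.5     # trend period, current period to effective date

def trended_loss(loss, year):
    # Step 1: bring the historical year to current level via the index ratio.
    on_level = loss * index[current_year] / index[year]
    # Step 2: apply the single projected trend from current forward.
    return on_level * (1 + prospective_trend) ** years_to_effective

print(round(trended_loss(1000.0, 2020), 2))
```

Note the "prior" step varies by year through the index, while the "future" step is one selection, matching the two-step structure described above.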


I agree with LivelySedative, and to add, I think people often treat the historical and prospective loss trends as the same thing, but there is a subtle difference. With the historical trend you're really trying to take the historical losses and make them apples to apples with the current period, while with the prospective trend you're making a forecast of how the current period will change into your prospective period.

Also, on the topic of multiple historical trends: that's entirely reasonable if you have the data to support it. It's easiest to see if there were shocks in your loss history. For example, with lots of data you might see your ultimate severities inflect in a couple of spots but maintain a consistent trend between inflections. It's more accurate to apply the trends relevant to these periods (say 2% for a flatter period and then 4% post-inflection) in adjusting those historical losses to current, as opposed to just selecting something like 3% and applying it equally to all years.
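To make the 2%/4% example concrete, here's a sketch comparing piecewise trend factors to a single flat selection. The inflection year and rates are assumptions for illustration:

```python
# Sketch of piecewise historical trends vs. one flat trend, using the
# 2% pre-inflection / 4% post-inflection example. Years are made up.
inflection_year = 2021
current_year = 2023

def factor_piecewise(year):
    # 2% per year up to the inflection, 4% per year after it.
    low_years = max(0, inflection_year - year)
    high_years = current_year - max(year, inflection_year)
    return (1.02 ** low_years) * (1.04 ** high_years)

def factor_flat(year):
    # A single 3% selection applied equally to all years.
    return 1.03 ** (current_year - year)

for year in (2018, 2020, 2022):
    print(year, round(factor_piecewise(year), 4), round(factor_flat(year), 4))
```

The older years get over-trended and the recent years under-trended by the flat 3% relative to the piecewise factors, which is exactly the distortion being described.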

Exam 5 covers two-step trending for losses (and premium):

You can either index to the latest period like ALivelySedative mentioned, or you can fit a trend line across your experience period. There are some distinctions in why you would choose one method or another, but in most cases it won’t be a material difference. In both cases you project forward to the prospective period with a selected trend.

To clarify here: what I'm describing isn't the two-step trend process out of Exam 5, where you have Trend A for history and Trend B for the future. That makes sense; it's something I've historically used. It's more a "two-step trend in the experience period" as Dan mentioned: Trend A applies to a few years of experience, Trend B applies to the rest of the period. My question comes down to when that's appropriate (and maybe even necessary) over just Trend A for history.

The "shock inflation" example Dan references is a good illustration. If, say, the first 2 years had "normal" loss trend, the next 2 years had a shock that increased loss trend, and the 5th (final) year had higher-than-normal trend as a result of the shock from preceding years, it would seem that one loss trend over all years is inappropriate; certain years may be under-trended, others over-trended. But do you have 2 loss trends? 3 loss trends?

And maybe for additional clarification: do we gain accuracy with ratemaking if we have different trends by year in the experience period? I’m not so much asking a question of “how should I do ratemaking” as much as I’m lobbing it as a thought question and I can’t find anything in CAS literature that addresses this.

“Actuarial judgement” :man_shrugging:
I would say that, like any model, you'll gain accuracy by having more (accurate) data/trends. The question is how much more accurate you're going to get, and whether it's worth spending the time on that additional analysis. I don't think there's a straight answer to this.


Like many actuarial things I think it really boils down to a question of just how credible your data is. In an ideal case where we have effectively unlimited data then I’d say it does increase the accuracy of your analysis to use multiple historical trends for different ranges. We’d do this most commonly at a large insurer working on Homeowners for some of the largest states, but only if there were interesting inflections (ideally with some sort of explanation). Another common one is a significant mix shift in the underlying risk that you know about because there was a big effort by underwriting to change the type of risk you were writing.

But again the real tradeoff is the risk of adjusting for what is truly noise and over-refining your analysis, not to mention the extra time + explanation, etc.

Edit: though that example actually gets into some interesting trend territory where you can have multiple cohorts of exposure experiencing different trend rates such that they might individually be experiencing, say, 2% trend but combined they’re experiencing 5%, or -5%, because the amount of business is changing between the two cohorts and they have very different average severities/premiums/etc. I think it’s more a case on the premium trend side though.

Hm. Dan’s answer is better than mine.

:laughing: I think it all adds helpful context

Nothing to stop you from using multiple trends in the projected period, if there’s something concrete you can cling to (policy expiration for a runoff program, law implementation date, conversion of policies from one program to another, etc.).

Otherwise, simpler is always better. Hard to justify otherwise.

Oops didn’t notice we’re talking loss trends only. But same logic applies.

There really isn’t a big difference between using two-step trend to separate the historical and prospective periods, and using two-step trend over just the historical period. All you’re saying is that something is changed with the loss experience and you are trying to account for that change.

How you justify those changes will be really specific to the LOB you're modeling. If you are looking at a casualty line that is sensitive to jury awards, then social inflation will appear more like a stair-step than a straight loss trend. In Workers' Comp, a legislative change will be a one-time calendar-period adjustment. If you're in personal lines with a lot of data, you can even frame loss trend as a regression problem and backtest to measure the tradeoff between model complexity and accuracy.

It will also depend on who you are justifying to. If you need to justify the loss trend to a regulator or to executive management, then your limiting factor is what you can explain or defend, rather than anything statistical.

That’s where I’m going with my question. Has anyone in the past tried to tackle this in a CAS paper?

I’ve thought that for years, and yet people continue to want to crank out papers and presentations discussing how to try and whittle down the error term in calculations by all kinds of esoteric methods.

What’s the phrase I was given many years ago, that apparently came from some CAS presentation on this? Ah, yes - delusional exactitude. The idea that we can be incredibly precise with estimates as if that makes them more meaningful for decision-making.

I do agree; I think there's a tipping point at which complexity overrides accuracy. (Rant about how predictive analytics and data science abuse the shit out of this.) But I also concede there are times when additional complexity is needed for meaningfully better accuracy; again, that gets to the question I'm asking.

I would argue there’s a big difference. With “two-step trend, one for historical years and one for future years” you implicitly state “this trend I’m applying to the experience period? It’s accurate and representative.” With “two-step trend over the historical period” you’re saying “there’s been a shift at some point, we need to split the experience period such that we more accurately reflect the change in trend” - but it’s more nuanced than that.

Say over your 5-year period loss trend went from 1% to 8% due to inflation, material shortages, whatever, and it was really apparent by Year 4 that 1% was no longer the right trend. There's clearly a big shift and it's probably material; however, the point at which you break your experience period up to apply those two trends implicitly pretends that the trend change was sudden and can be clearly attributed to some point in time. In reality, that's … probably not the case. 8% might be good for, say, Year 5 and 1% might be good for Year 1, but if that shift occurred during Years 2-4 and you say "we're breaking at Year 4," you're now probably under-trending Years 2 and 3 and over-trending Year 4. [Cue discussion on accuracy vs. complexity.] And if the next year comes in at 10%, it aggravates the problem.
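A toy numerical version of that break-point distortion, with assumed "true" year-by-year trends that ramp gradually from 1% to 8% versus a hard break at Year 4 (all numbers invented for illustration):

```python
# Sketch of the break-point issue: the true trend ramps over Years 2-4,
# but the selected trends break sharply at Year 4. Numbers are made up.
true_trend = {1: 0.01, 2: 0.03, 3: 0.05, 4: 0.06, 5: 0.08}
selected = {y: 0.01 if y < 4 else 0.08 for y in true_trend}

def to_current(trends, year):
    # Cumulative factor to bring a year's losses to end of Year 5.
    factor = 1.0
    for y in range(year, 6):
        factor *= 1 + trends[y]
    return factor

for year in sorted(true_trend):
    ratio = to_current(selected, year) / to_current(true_trend, year)
    print(f"Year {year}: selected/true trend factor = {ratio:.3f}")
```

Ratios below 1 are under-trended years (here Years 2 and 3) and ratios above 1 are over-trended (Year 4), matching the concern above.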

If you have some gradual change in the trend as you describe then I’d probably be more keen to go with a single rate and might just be more reluctant to include older years in the weighting. While it’s fun to add nuance with multiple historical trends, etc. I tend to only do that if it’s relatively easy to make the argument in support (such as with a clear inflection).

Is it possible that by limiting the trending factors, we are essentially preserving the “process risk” in the historical years?

If we try to completely eliminate the "parameter risk" in the trending factors (say, each historical year gets its own trending factor), we would invariably remove a portion (maybe not an insignificant one) of the "process risk" (the variation truly due to randomness) that is valuable for our analysis.