Let's talk loss trending

Actuarial literature seems to overwhelmingly choose one trend for the past and apply it to all years, and the same for projecting loss trend forward. Even with “one trend for the past, one trend for the future,” there’s exactly one trend rate chosen for each, and it doesn’t vary by year. For most situations this is probably OK. For the current economic environment, it arguably isn’t.

Is there literature out there that discusses choosing different trend rates by year, prior/future? I’m asking because, with inflation by whatever measure you prefer running hotter than in prior years, at some point this year will become a prior year in ratemaking, and choosing one and only one loss trend would seem to create problems. The same goes for projecting forward, if you think the current inflation rate is “transitory” and will slow before your chosen effective date, or if you’re doing planning and trying to project out a few years.

Use an indexing approach to bring different years forward, i.e., prior trend = current year index / prior year index for each year. The projected trend from current should still be the same regardless, though, in a “two-step” approach. I can’t cite any literature directly, but I have seen this in filings in practice.
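A minimal sketch of that indexing idea in Python (the index values, years, and selected prospective trend are all made up for illustration):

```python
# Hypothetical severity index by accident year (CPI-like or internal);
# all values are made up for illustration.
index = {2018: 100.0, 2019: 102.5, 2020: 104.0, 2021: 110.2, 2022: 119.5}

current_year = 2022

# Step 1: each prior year gets its own implied trend from the index path.
on_level = {yr: index[current_year] / index[yr] for yr in index}

# Step 2: one selected trend from the current year to the prospective
# period, the same for every year (judgmental selection).
selected_future_trend = 0.05
years_to_midpoint = 1.5   # current period midpoint to rating period midpoint
projection = (1 + selected_future_trend) ** years_to_midpoint

total_trend_factor = {yr: f * projection for yr, f in on_level.items()}
```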


I agree with ALivelySedative, and to add, I think people often treat the historical and prospective loss trends as the same thing, but there is a subtle difference. With the historical trend you’re really trying to take these historical losses and make them apples to apples with the current period, while with the prospective trend you’re making a forecast as to how the current period will change into your prospective period.

Also, on the topic of multiple historical trends, that’s entirely reasonable if you have the data to support it. It’s easiest to see if there were shocks in your loss history. For example, if you have lots of data you might see your ultimate severities inflect in a couple of spots but maintain a consistent trend between inflections. It’s more accurate to apply the trend relevant to each of these periods (say 2% for a flatter period and then 4% post-inflection) in adjusting those historical losses to current, as opposed to just selecting something like 3% and applying it equally to all.
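A sketch of applying that piecewise idea, with a made-up inflection year (2% before 2019, 4% after, trending everything to 2022):

```python
# Piecewise trend factors to bring each accident year to current (2022)
# cost level, assuming a hypothetical inflection at 2019.
LOW, HIGH, INFLECTION, CURRENT = 0.02, 0.04, 2019, 2022

def trend_to_current(year):
    """Compound forward using whichever trend was in force over each sub-period."""
    low_years = max(0, INFLECTION - year)     # years trended at 2%
    high_years = CURRENT - year - low_years   # years trended at 4%
    return (1 + LOW) ** low_years * (1 + HIGH) ** high_years

for yr in range(2015, 2023):
    print(yr, round(trend_to_current(yr), 4))
```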

Exam 5 covers two-step trending for losses (and premium):

You can either index to the latest period like ALivelySedative mentioned, or you can fit a trend line across your experience period. There are some distinctions in why you would choose one method or another, but in most cases it won’t be a material difference. In both cases you project forward to the prospective period with a selected trend.
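For the fitted-line flavor, a minimal sketch with numpy (severities and selections made up): fit an exponential trend across the experience period, bring each year to the latest period along the fitted line, then apply a separately selected prospective trend.

```python
import numpy as np

# Hypothetical average severities by accident year (made up).
years = np.array([2017, 2018, 2019, 2020, 2021])
severity = np.array([4100., 4290., 4420., 4650., 4890.])

# Step 1: exponential trend via log-linear least squares.
slope, intercept = np.polyfit(years, np.log(severity), 1)
historical_trend = np.exp(slope) - 1        # annualized fitted trend
fitted = np.exp(intercept + slope * years)
step1 = fitted[-1] / fitted                 # each year to latest, on the line

# Step 2: selected prospective trend from the latest period to the
# average loss date of the rating period (judgmental).
step2 = (1 + 0.06) ** 2.0                   # e.g. 6% for 2 years

trend_factors = step1 * step2
```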

To clarify here: what I’m describing isn’t the two-step trend process out of Exam 5, where you have Trend A for history and Trend B for the future. That makes sense; it’s something I’ve historically used. It’s more a “two-step trend in the experience period,” as Dan mentioned: Trend A applies to a few years of experience, Trend B applies to the rest of the period. My question comes down to when that’s appropriate (and maybe even necessary) over just Trend A for all of history.

The “shock inflation” example Dan references is a good illustration. If, say, the first 2 years had “normal” loss trend, the next 2 years had a shock that increased loss trend, and the 5th (final) year had higher-than-normal trend as a result of the shock from the preceding years, it would seem that one loss trend over all years is inappropriate; certain years may be under-trended, others over-trended. But do you have 2 loss trends? 3 loss trends?

And maybe for additional clarification: do we gain accuracy with ratemaking if we have different trends by year in the experience period? I’m not so much asking a question of “how should I do ratemaking” as much as I’m lobbing it as a thought question and I can’t find anything in CAS literature that addresses this.

“Actuarial judgement” :man_shrugging:
I would say that, like any model, you’ll gain accuracy by having more (accurate) data/trends. The question is how much more accurate you’re going to get, and whether it’s worth the time to do that additional analysis. I don’t think there’s a straight answer to this.


Like many actuarial things I think it really boils down to a question of just how credible your data is. In an ideal case where we have effectively unlimited data then I’d say it does increase the accuracy of your analysis to use multiple historical trends for different ranges. We’d do this most commonly at a large insurer working on Homeowners for some of the largest states, but only if there were interesting inflections (ideally with some sort of explanation). Another common one is a significant mix shift in the underlying risk that you know about because there was a big effort by underwriting to change the type of risk you were writing.

But again the real tradeoff is the risk of adjusting for what is truly noise and over-refining your analysis, not to mention the extra time + explanation, etc.

Edit: though that example actually gets into some interesting trend territory where you can have multiple cohorts of exposure experiencing different trend rates, such that they might individually be experiencing, say, 2% trend but combined they’re experiencing 5%, or −5%, because the mix of business is shifting between the two cohorts and they have very different average severities/premiums/etc. I think it’s more a case on the premium trend side, though.
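A toy numerical check on that mix-shift effect (all numbers made up): two cohorts each trending at exactly 2%, but the book shifting toward the high-severity cohort makes the aggregate severity trend far higher.

```python
# Two cohorts, each with a true 2% severity trend; only the mix changes.
sev_a, sev_b = 1000.0, 5000.0     # year-1 average severities
trend = 0.02
mix_y1 = (0.80, 0.20)             # claim-count mix in year 1
mix_y2 = (0.70, 0.30)             # mix shifts toward the expensive cohort

avg_y1 = mix_y1[0] * sev_a + mix_y1[1] * sev_b
avg_y2 = mix_y2[0] * sev_a * (1 + trend) + mix_y2[1] * sev_b * (1 + trend)

print(avg_y2 / avg_y1 - 1)        # ~0.25 aggregate "trend" vs. 2% per cohort
```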

Hm. Dan’s answer is better than mine.

:laughing: I think it all adds helpful context

Nothing to stop you from using multiple trends in the projected period, if there’s something concrete you can cling to (policy expiration for a runoff program, law implementation date, conversion of policies from one program to another, etc.).

Otherwise, simpler is better; anything more is hard to justify.

Oops, didn’t notice we’re talking loss trends only. But the same logic applies.

There really isn’t a big difference between using two-step trend to separate the historical and prospective periods, and using two-step trend over just the historical period. All you’re saying is that something is changed with the loss experience and you are trying to account for that change.

How you justify those changes will be really specific to the LOB you’re modeling. If you are looking at a casualty line that is sensitive to jury awards, then social inflation will appear more like a stair-step than a straight loss trend. In Workers’ Comp, a legislative change will be a one-time calendar-period adjustment. If you’re in personal lines with a lot of data, you can even frame loss trend as a regression problem and backtest to measure the tradeoff between model complexity and accuracy.
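As a sketch of that backtest idea with numpy (severities made up, break year hypothetical): hold out the latest years, fit competing trend models on the rest, and compare out-of-sample error.

```python
import numpy as np

# Hypothetical average severities by year; hold out the last 2 years.
years = np.arange(2012, 2023)
log_sev = np.log(1000 * np.array(
    [3.2, 3.3, 3.35, 3.5, 3.55, 3.7, 3.8, 4.1, 4.4, 4.8, 5.2]))
train, test = slice(None, -2), slice(-2, None)

def rmse(model):
    predict = model(years[train], log_sev[train])
    return np.sqrt(np.mean((predict(years[test]) - log_sev[test]) ** 2))

# Model 1: single trend (straight line in log space).
def single(x, y):
    b, a = np.polyfit(x, y, 1)
    return lambda xs: a + b * xs

# Model 2: piecewise trend with a hypothetical break at 2018.
def piecewise(x, y):
    X = np.column_stack([np.ones(len(x)), x, np.maximum(x - 2018, 0)])
    c, *_ = np.linalg.lstsq(X, y, rcond=None)
    return lambda xs: c[0] + c[1] * xs + c[2] * np.maximum(xs - 2018, 0)

for name, model in [("single", single), ("piecewise", piecewise)]:
    print(name, rmse(model))
```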

It will also depend on who you are justifying to. If you need to justify the loss trend to a regulator or to executive management, then your limiting factor is what you can explain or defend, rather than anything statistical.

That’s where I’m going with my question. Has anyone in the past tried to tackle this in a CAS paper?

I’ve thought that for years, and yet people continue to want to crank out papers and presentations discussing how to try and whittle down the error term in calculations by all kinds of esoteric methods.

What’s the phrase I was given many years ago, that apparently came from some CAS presentation on this? Ah, yes - delusional exactitude. The idea that we can be incredibly precise with estimates as if that makes them more meaningful for decision-making.

I do agree; I think there’s a tipping point at which complexity overrides accuracy. (Rant about how predictive analytics and data science abuse the shit out of this.) But I also concede there are times where additional complexity is needed for more meaningful accuracy; again, that gets at the question I’m asking.

I would argue there’s a big difference. With “two-step trend, one for historical years and one for future years” you implicitly state “this trend I’m applying to the experience period? It’s accurate and representative.” With “two-step trend over the historical period” you’re saying “there’s been a shift at some point, we need to split the experience period such that we more accurately reflect the change in trend” - but it’s more nuanced than that.

Say over your 5-year period loss trend went from 1% to 8% due to inflation, material shortages, whatever, and it was really apparent by Year 4 that 1% was no longer the right trend. There’s clearly a big shift and it’s probably material; however, the point at which you break your experience period up to apply those two trends implicitly pretends that the trend change was sudden and can be clearly attributed to some point in time. In reality, that’s … probably not the case. 8% might be good for say Year 5 and 1% might be good for Year 1, but if that shift occurred during Years 2-4 and you say “we’re breaking at Year 4” you’re now probably under-trending Years 2 and 3 and over-trending Year 4. [Cue discussion on accuracy vs. complexity.] And if the next year comes in at 10%, it aggravates the problem.
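To put rough (made-up) numbers on that: suppose the true year-over-year trend ramps up gradually through Years 2–4, but we apply a hard break at Year 4 with 1% before and 8% after, trending everything to Year 5.

```python
# Made-up ramp: true trend from Year i to Year i+1.
true_yoy = {1: 0.01, 2: 0.03, 3: 0.05, 4: 0.07}

def true_factor(year):                  # product of the true steps to Year 5
    f = 1.0
    for y in range(year, 5):
        f *= 1 + true_yoy[y]
    return f

def hard_break(year, brk=4, low=0.01, high=0.08):
    low_steps = max(0, brk - year)      # steps trended at 1%
    return (1 + low) ** low_steps * (1 + high) ** (5 - year - low_steps)

for yr in range(1, 5):
    t, a = true_factor(yr), hard_break(yr)
    print(f"Year {yr}: true {t:.3f}  applied {a:.3f}  error {a/t - 1:+.1%}")
# Years 1-3 come out under-trended, Year 4 slightly over-trended.
```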

If you have some gradual change in the trend as you describe, then I’d probably be more keen to go with a single rate and might just be more reluctant to include older years in the weighting. While it’s fun to add nuance with multiple historical trends, etc., I tend to only do that if it’s relatively easy to make the argument in support (such as with a clear inflection).

Is it possible that by limiting the trending factors, we are essentially preserving the “process risk” in the historical years?

If we try to completely eliminate the “parameter risk” in the trending factors (say, each historical year gets its own trending factor), we would invariably remove (maybe a not-insignificant portion of) the “process risk” (that which is truly due to randomness) that is valuable for our analysis.

I have seen an index-type approach used fairly commonly, though I don’t come from a traditional ratemaking/rate-indication background. I’ve often tried to smooth it out (i.e., if the historical data looks like 5%, 6%, 4%, 5%, I’ll probably just pick 5%), but if there are clear ups and downs in different periods of the history, it can be useful to get a bit more granular. E.g., if the trend was 1% for several years (say 2010–2015), then 9% for several years (say 2015–2020), and you pick something closer to 5% for the whole period, then the trended 2016 number specifically is going to be undertrended.
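Putting arithmetic to that 2016 example (using the stated 9% versus a flat 5% pick, trending to 2020):

```python
# 2016 -> 2020 is four years of trend.
true_2016 = 1.09 ** 4    # ~1.41 at the true 9%
flat_2016 = 1.05 ** 4    # ~1.22 at the flat 5% pick
print(flat_2016 / true_2016 - 1)   # ~-14%: materially undertrended
```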

Probably depends on the purpose of your analysis and how exactly you’re using the results.

I think one thing that happens is you have artifacts that show volatility and uncertainty in a framework that doesn’t usually explicitly represent uncertainty in estimated trend patterns; for example, there are usually no standard errors reported.

I think it’s helpful to think of there being multiple sources of variation. Some is per policy (or claim, group, whatever) and usually “diversifies” away with enough data per year (or month, quarter, etc.). Some is correlated across an entire year, but not across years; it might look like the constant Gaussian “error” you’d expect in a linear model. And some is correlated across years in more complicated ways.

The single-trend model basically says: we have a single trend, and then remaining error that is correlated within each year, but not across years. We won’t worry explicitly about the error in the trend, partially because we still show volatility by trending each year and showing them separately, as opposed to just giving a single prediction for our target year. Also, we do get around to considering error in the trend when we select it.

The two-trend process is basically the same, but allows a one-time change in the trend. It allows extra judgement in selecting the trend, as I’ve seen it used.

Then approaches like ARIMA pretty explicitly allow much more complicated correlations in variation across years. Or some state-space approaches do similar things through a Bayesian lens.
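A sketch of what that can look like with statsmodels (severities made up): a deterministic trend with AR(1) errors, i.e. “single trend plus autocorrelated noise.”

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical log average severities by year.
log_sev = np.log([4100, 4350, 4500, 4480, 4700, 5100, 5600, 6200])

# order=(1, 0, 0) with trend="ct": constant + linear trend + AR(1) errors.
fit = ARIMA(log_sev, order=(1, 0, 0), trend="ct").fit()

print(fit.summary())                    # the trend slope ~ annual log trend
print(fit.get_forecast(2).conf_int())   # projections WITH uncertainty bands
```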

Modeling a separate trend for each year might look like a single- or two-trend model where you just predict your period of interest, and “interpret” that as a separate trend for each individual year. Then that is really the same approach, but you are not showing volatility, and therefore uncertainty, in any way, which might not be what you want. Or it might be re-allocating the uncorrelated per-year errors (which I think you are calling process risk) to some kind of variation that is correlated across years, which might be the wrong model.
