Paper/resource for bending Severity CDF

I’m on a project where I need to do a reinsurance simulation analysis, which I have experience doing at a broker. However, on the carrier side of the house, I’m not sure how to take our existing severity CDFs and modify them to hit certain layer targets, e.g., if the curve suggests a 5% loss in layer for the 10M xs 10M layer but my analysis says 7%. Is there a known algorithm that can systematically adjust CDFs to hit layer targets? I’m pretty sure all of the brokers do this, but I can’t think of how best to tackle this challenge from the other side. Thanks!

I think this might be what you are looking for:

Sahasrabuddhe, R., “Claims Development by Layer: The Relationship between Claims Development Patterns, Trend and Claim Size Models,” Casualty Actuarial Society E-Forum, Fall 2010, Volume 1 (revised January 2, 2013). Including errata.

Or maybe not, after rereading your post.

Several years ago, our reinsurance broker shared with me a tool that:

  • given a set of layers, relative frequencies, and mean severities within each layer; and
  • given a starting cdf

…would “bend” the cdf to fit the details provided for each layer.

The tool was built in Excel and uses Solver to work through the combinations, although I haven’t audited the code/logic enough to fully grok its magic. I’m just happy with (or at least content with) the results.

It’s not my IP to share, unfortunately. But it might be worth having your reinsurance team put you in contact with your reinsurance broker(s) to see if they have some code or a tool they can let you use.
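
For flavor only, here is a rough sketch of one way that kind of “bend” could be set up; this is purely illustrative and not the broker’s tool, and every input below is a made-up placeholder. The idea: discretize the starting cdf, then solve for adjusted probabilities that hit a target probability mass and conditional mean severity in each layer band while staying close to the starting weights.

```python
import numpy as np
from scipy import optimize, stats

# Purely illustrative -- not the broker's tool. All inputs are placeholders.
edges = np.geomspace(1e3, 50e6, 501)               # discretization grid for loss sizes
x = np.sqrt(edges[:-1] * edges[1:])                # midpoint of each cell (geometric)
start = stats.lognorm(s=2.0, scale=np.exp(13.0))   # hypothetical starting severity curve
p = np.diff(start.cdf(edges))
p /= p.sum()                                       # starting discrete weights

# (attach, exhaust, target mass in band, target mean severity in band) -- made up.
bands = [(10e6, 20e6, 0.05, 13.5e6),
         (20e6, 50e6, 0.02, 28.0e6)]

def objective(logw):
    w = np.exp(logw)
    w /= w.sum()
    err = 1e-3 * np.sum((w - p) ** 2)              # stay close to the starting curve
    for a, e, mass, mean in bands:
        idx = (x > a) & (x <= e)
        band_mass = w[idx].sum()
        band_mean = (w[idx] * x[idx]).sum() / max(band_mass, 1e-12)
        err += (band_mass - mass) ** 2 + ((band_mean - mean) / mean) ** 2
    return err

res = optimize.minimize(objective, x0=np.log(p + 1e-12), method="L-BFGS-B")
w = np.exp(res.x)
w /= w.sum()
bent_cdf = np.cumsum(w)                            # the "bent" discrete cdf over x
```

I would guess Solver is doing something broadly in this spirit, but again, I haven’t looked under the hood.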

It is unclear to me what you are asking for.

For starters, do you use CDF to mean Cumulative Distribution Function (a probability distribution used to model claims sizes) or Cumulative Development Factors (loss development factors by layer)?

Ah, apologies. Cumulative distribution function. Basically, what are the mechanics/methods to bend existing (usually industry) curves to achieve certain layer targets while still having a coherent/reasonable distribution? In the case of excess layers, you can’t really do a full MLE/standard curve-fitting technique since there’s so little volume in the relevant layers.

Do you wish to simulate individual losses to the layers?

How would your analysis come up with 7% when your existing CDF is saying 5%? Does that mean you are exposure rating with a CDF and getting 5% and experience rating and getting 7%?

In this case I would do an experience and exposure rating and make a blended selection rather than bend the curve if I understand the question correctly.

Yeah, or let’s say experience rating is 9%, so you get to a blended 7% with 50/50 weighting. How would you parameterize a curve for stochastic modeling when you only have the existing distribution that hits a 5% target?

This works for the pricing/loss estimation piece, but if you’re doing a volatility analysis, you still need a new curve that ties to the blended target to put into your stochastic model.

Why not increase frequency by 40% (noting 7% / 5% = 1.40), choose a frequency distribution such as a negative binomial, and pick a volatility factor (your choice of std dev, variance, or coefficient of variation) that gets your modeled losses to approximate the observed volatility in the experience?

This ties to the blended target by matching the first two moments.
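
To make that concrete, here is a minimal Python sketch of the moment matching. The claim count, CV, and all other inputs below are made-up placeholders, not anyone’s actual figures:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical inputs, for illustration only.
base_freq = 0.50              # expected claim count to the layer implied by the curve
freq_scale = 0.07 / 0.05      # = 1.40, the bump from 5% to the blended 7%
target_mean = base_freq * freq_scale

# Selected volatility factor: here a coefficient of variation for the counts.
target_cv = 1.25              # judgmental placeholder

# Negative binomial by method of moments: mean m, variance v = m + m^2/r.
m = target_mean
v = (target_cv * m) ** 2
assert v > m, "negative binomial needs variance > mean; otherwise use Poisson"
r = m ** 2 / (v - m)          # NB size parameter
p = r / (r + m)               # NB success probability

counts = rng.negative_binomial(r, p, size=100_000)
print(counts.mean(), counts.std() / counts.mean())   # ~0.70 and ~1.25
```

Severity stays on the existing curve; the frequency scaling carries the 5% to 7% adjustment and the selected CV carries the volatility.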

If the experience in the layer has very low credibility, using it exclusively for a stochastic model would seem to imply a level of precision that may not be applicable.

He’s already addressed that with his credibility weighting on the experience and exposure.

The answer to this dilemma is that you need to convolve the frequency and severity curves.

Don’t think of this as modifying the severity CDF. Think of this as two separate distributions that must be “mingled.”
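
As a generic Monte Carlo sketch of that mingling (not anyone’s specific model; every parameter below is a hypothetical placeholder): draw a claim count, draw that many severities, and apply the layer to each claim.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# All parameters below are hypothetical placeholders, for illustration only.
attach, limit = 10e6, 10e6            # 10M xs 10M layer
freq_mean, freq_cv = 25.0, 0.30       # ground-up claim counts
sev_mu, sev_sigma = 13.0, 2.2         # ground-up lognormal severity (log scale)

n_sims = 50_000

# Negative binomial counts matched to the chosen mean/CV (method of moments).
v = (freq_cv * freq_mean) ** 2
r = freq_mean ** 2 / (v - freq_mean)
p = r / (r + freq_mean)
counts = rng.negative_binomial(r, p, size=n_sims)

agg = np.zeros(n_sims)
for i, n in enumerate(counts):
    if n == 0:
        continue
    ground_up = rng.lognormal(sev_mu, sev_sigma, size=n)
    agg[i] = np.clip(ground_up - attach, 0.0, limit).sum()   # loss to the layer

print(agg.mean(), agg.std())          # aggregate layer mean and volatility
```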

It sounds like you already have layer targets in mind, so this is more a question about implementation.

If your software is expecting a parameterized distribution such as a lognormal, you will need to refit the curve using chi-squared (rather than MLE) so that it hits your layer targets. There is no guarantee that the distribution will be able to match your targets, however, because the curve can only bend in specific ways.
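
For example, a minimal sketch of a least-squares refit of a lognormal against layer targets (made-up targets and starting parameters; “target share” here means expected loss in the layer as a fraction of total expected ground-up loss):

```python
import numpy as np
from scipy import optimize, stats

# Hypothetical layer targets: expected loss in the layer as a share of total
# expected ground-up loss. All numbers are made up for illustration.
layer_targets = [
    (10e6, 20e6, 0.07),    # 10M xs 10M should carry ~7% of total expected loss
    (20e6, 30e6, 0.03),
]

def lev_lognormal(d, mu, sigma):
    """Limited expected value E[min(X, d)] for a lognormal."""
    z = (np.log(d) - mu) / sigma
    return np.exp(mu + sigma ** 2 / 2) * stats.norm.cdf(z - sigma) + d * stats.norm.sf(z)

def layer_share(attach, exhaust, mu, sigma):
    """Expected layer loss as a share of total expected loss."""
    total = np.exp(mu + sigma ** 2 / 2)
    return (lev_lognormal(exhaust, mu, sigma) - lev_lognormal(attach, mu, sigma)) / total

def objective(params):
    mu, sigma = params
    return sum((layer_share(a, e, mu, sigma) - t) ** 2 for a, e, t in layer_targets)

# Start from the existing (industry) curve parameters -- placeholders here.
res = optimize.minimize(objective, x0=[13.0, 2.0],
                        bounds=[(None, None), (1e-3, None)], method="L-BFGS-B")
print(res.x, res.fun)   # refit (mu, sigma); check the residual -- targets may not be hittable
```

With only two free parameters you can usually only hit a couple of targets exactly, which is the “can only bend in specific ways” limitation.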

If your software allows you to enter custom distributions, the easiest thing would be to use your layer targets as-is and then interpolate between them. If you want to be fancier, you can set up a bunch of splines to ensure the derivatives are smooth.
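
A quick sketch of the simpler version (made-up knot points; PCHIP keeps the interpolated cdf monotone, which an ordinary cubic spline won’t guarantee):

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# Hypothetical knots: loss amounts and the CDF values you want the bent curve to
# pass through (e.g., backed out of your layer targets). Illustration only.
loss_points = np.array([0.0, 1e6, 5e6, 10e6, 20e6, 50e6])
cdf_points  = np.array([0.0, 0.60, 0.85, 0.93, 0.98, 1.00])

# Monotone piecewise-cubic interpolation of the CDF between the knots.
cdf = PchipInterpolator(loss_points, cdf_points)

# Numeric inverse-CDF sampling for the stochastic model.
grid = np.linspace(0.0, 50e6, 20_001)
rng = np.random.default_rng(seed=2)
u = rng.uniform(0.0, 1.0, size=100_000)
samples = np.interp(u, cdf(grid), grid)

print(np.mean(samples > 10e6))   # implied probability of a loss exceeding the 10M attachment
```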

If you are asking about how to blend experience with a reference curve, that is a more general question. I don’t believe there are any standard approaches, but there are papers that reference the problem, such as this one: