ISO Size of Risk Rating

I’m assuming this is … appropriate to discuss, since it’s got actuarial implications? But if not, mods - you know what to do.

Is anyone using ISO’s GL Size of Risk Rating methodology? I see a lot of companies filing to non-adopt it, and in my analysis it looked like an overall net decrease (makes sense, most premium is in larger policies that have more exposures), but by policy and class the changes are ALL over the place, and it gives me the real heebie-jeebies about using it.

1 Like

Just a reminder that actuaries should be cautious about any exchange of information that might signal pricing decisions. This isn’t a CAS meeting, but these guidelines from the CAS seem relevant:

In analyzing whether information to be exchanged at any Casualty Actuarial Society meeting or seminar is acceptable under antitrust guidelines, two critical questions must be asked. These are:

  1. How does the information relate to the competitive behavior of the companies or firms represented by participants?
  2. How does the information affect the independent business decisions of the companies or firms represented by participants?

As a general rule, if the exchange of information relates to the future competitive behavior of an individual company or will affect the independent business decisions of an individual company, then it is prohibited by these guidelines. More specific guidelines are as follows:

  1. Discussion or exchange of information at Casualty Actuarial Society meetings or seminars concerning future price information or future competitive positions of an individual company or companies are prohibited.
  2. Discussions or exchange of information at Casualty Actuarial Society meetings or seminars concerning current and future underwriting rules that deal with the eligibility for insurance with a particular company are prohibited.
  3. Information concerning current experience of an individual competitor may, in some circumstances, be viewed as a means of “signaling” future pricing or business decisions. It is, therefore, potentially suspect, and should not be presented or exchanged without an affirmatively stated purpose that is consistent with current industrywide data or experience and with competitive objectives.
  4. Where an interpretation or analysis of information concerning past or current experience or prices is exchanged, the risk that the collective action will be linked to future market conduct is substantially increased. The prediction of a trend and its implications is, as a general rule, a matter for individual and independent decision-making.
  5. A description of an actuarial methodology or mode of analysis of data and its logical internal consistency and past predictive accuracy is not a violation of the antitrust laws. Such a description, however, must be undertaken with extreme care to avoid being viewed as a means of “signaling” future pricing decisions. Any application or example of the methodology or analysis should be presented using insurance company experience that is generally available to the public or is hypothetical in nature rather than the past or current experience of any actual individual competitor.

That being said, the mods have discussed this and think there’s a lot of discussion that can be had within those constraints.

(Also, be aware that if there were to be an ABCD complaint, this board’s policy is to give posts to the ABCD, even if they’ve been deleted.)

3 Likes

I haven’t done pricing/product work in several years, but back in the day this would have probably been one of those things I’d be filing to non-adopt primarily because IT was giving me some insane length of time before they’d be able to provide systems support.

I mean, there’s that too, but that’s not a consideration here.

I’m trying to figure out how to word this and stay within guidelines here, but every question I have ultimately leads back to “I think something’s wrong with ISO’s Size of Risk values” re: the relationship of premiums to exposures, and I’m curious if anyone else has taken a look at it.

Add: I’m going to throw an example in here, purely hypothetical, but it will hopefully illustrate my concern.

Say you have a risk in class code 46912 (Race Tracks - Operators NOC) in Michigan, territory 503. It has $25,000 in gross sales; the premium basis is gross sales and the divisor is 1000, so loss costs are “per $1000 of gross sales” and its exposure is 25 units (25000 / 1000). Its ISO loss cost (as of 5/1/2024) is $109. Its current Size of Risk loss cost (as of 7/1/2023) is $112, it would look up an exposure relativity factor from table 109, and it wishes to have a 1M/2M occ/agg limit, so its ILF (from ILT 2) is 1.70.

  • Normal ISO rating: 25 x 109 x 1.70 = $4633 (rounded to the nearest dollar)
  • ISO Size of Risk rating: 25 x 112 x 5.0713 (the relativity factor from table 109 at 25 units) x 1.70 = $24,139

So before we even consider any other rating variables - especially LCM - Size of Risk rating has produced a base premium [exposure x loss cost x exposure relativity x ILF] that is nearly the entire gross sales of the risk. Even backing it off to base limits of 100/200, it’s still a base premium of $14,200 or about 56% of the gross sales.
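If it helps to see it spelled out, here’s a quick sketch of those two calculations in Python, using only the hypothetical numbers above (the variable names are mine, not ISO’s):

```python
# Hypothetical example only: class 46912, MI territory 503, $25,000 gross sales.
gross_sales = 25_000
exposure = gross_sales / 1_000       # loss costs are per $1,000 of gross sales -> 25 units

loss_cost_standard = 109             # ISO loss cost as of 5/1/2024
loss_cost_sor = 112                  # Size of Risk loss cost as of 7/1/2023
exposure_relativity = 5.0713         # table 109 relativity at 25 units
ilf_1m_2m = 1.70                     # ILT 2 factor for 1M/2M occ/agg

def dollars(x):
    return int(x + 0.5)              # round half up to the nearest dollar

normal = dollars(exposure * loss_cost_standard * ilf_1m_2m)                 # $4,633
sor = dollars(exposure * loss_cost_sor * exposure_relativity * ilf_1m_2m)   # $24,139
sor_base_limits = dollars(exposure * loss_cost_sor * exposure_relativity)   # $14,200 at 100/200

print(normal, sor, sor_base_limits)
print(round(sor / gross_sales, 2))   # SoR base premium is ~0.97 of gross sales
```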

“Ted, this is a far-fetched example; no race track operator will only have $25,000 in gross sales.” Maybe … or maybe it isn’t. I’m an actuary, I don’t run race tracks, and I don’t know what annual gross sales are for the average race track, especially smaller race tracks. Even at $100,000 of gross sales, SoR at 1M/2M limits gives a base rated premium [before application of other rating variables] of over $52,000, or over half the gross sales; “normal” rating would be $18,530.

That just seems … weird to me. It makes me wonder if the Size of Risk methodology isn’t being overly punitive for some smaller risks which ISO’s work says have worse experience.

1 Like

OK, I think I can better explain part of my issue here.

I think ISO is saying “after we account for all the factors that would normally apply after loss costs - so not just package mods and schedule mods and all the other rating variables you insurers use in rating, but experience mods as well - smaller risks still have worse experience than larger risks. And the only way possible to adjust for that is to use Size of Risk factors.” That implies either that smaller risks have [in some cases, significantly] higher claim frequencies (per exposure) than larger risks, or that smaller risks have [in some cases, significantly] higher claim severities (per claim) than larger risks - or a combination of the two. But I’ve learned through the actuarial exams that claim frequency is more predictable and claim severity is more random chance, so it’s probably claim frequency driving these relativity factors.

If you happen to land in SOR Premises table 104, the effect is so significant that if you’re half the size of the “default” risk [1000 exposure units for rating], your experience is deemed poor enough to need a factor of 1.4451 on top of any experience mod factor that might result [where the experience mod factor is now itself impacted by SOR rating - or at least that’s my interpretation of page B-2]. And there are other questions as well, but again … I’m trying to kickstart a discussion here.
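To make the stacking concrete, here’s a made-up illustration of my reading: the 1.4451 is the table 104 value mentioned above, but the loss cost, exposure, and mod are invented, and the factor names are mine, not the plan’s:

```python
# Illustrative only -- my reading of how an SOR premises relativity stacks on the mod.
exposure = 500            # half the 1,000-unit "default" risk
loss_cost = 0.50          # invented premises loss cost per exposure unit
sor_factor = 1.4451       # table 104 relativity quoted above
experience_mod = 1.10     # invented debit mod (itself influenced by SOR rating per page B-2)

with_sor = exposure * loss_cost * sor_factor * experience_mod
without_sor = exposure * loss_cost * experience_mod
print(round(with_sor, 2), round(without_sor, 2))   # ~44.5% surcharge on top of the mod
```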

And I don’t think I’d spend this much time on the topic, except right now this is an optional plan, but this could become a mandatory plan down the road and I want to have confidence that what ISO did at least passes the high-level sniff test - and right now, I don’t.

It’s an adjustment to recognize the fact that the exposures aren’t really fully independent. So if you had, say, 200 contractors each with $100K in gross sales, in total they would have higher losses than one company with 200 × $100K = $20M in gross sales, even though the total exposure base is the same. I don’t have the actual plan in front of me, so those numbers are totally made up.
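Purely to illustrate the shape of that argument (these relativities and the loss cost are invented, not from the plan):

```python
# 200 contractors at $100K each vs. one $20M account: same total exposure base,
# but invented size-of-risk relativities give very different expected losses.
loss_cost = 2.00            # hypothetical loss cost per $1,000 of gross sales
rel_small = 3.0             # invented relativity at 100 exposure units
rel_large = 0.8             # invented relativity at 20,000 exposure units

small_total = 200 * (100 * loss_cost * rel_small)   # 200 risks x 100 units each
large_total = 20_000 * loss_cost * rel_large        # one risk, 20,000 units

print(small_total, large_total)   # 120000.0 vs 32000.0 on the same 20,000 total units
```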

It could be that larger companies may be able to afford risk management departments to control their losses, or smaller risks may be systematically under-reporting their exposures. There could be a multitude of reasons, but ISO doesn’t typically give an explanation as to what the reason is. They build a new model and say that this new model seems more predictive.

Thank you for the response. I’m going to parse this a little, it should make sense why.

Let’s assume that this [a combination of small risks will have higher losses than one really large risk] is true. It very likely is; we can argue to what extent. [It’s GL, I don’t know how much dependence there is between locations. Probably zero in some cases, probably not zero in others, but my hunch is that it’s still relatively small where it exists.] Is the pure premium difference still that significant, considering that an effect like the risk-management one you mention should already be reflected in the experience mod, both for large risks [which have higher credibility] and small risks [which have lower credibility but still get a little weight put on their own experience]? Because at some point, risks are just so small you say “you know what, I can’t give it any weight … assign it some larger group value.”
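For reference, the kind of credibility-weighted mod I have in mind looks roughly like this - the square-root rule and the 683-claim full-credibility standard are textbook placeholders, not ISO’s actual plan:

```python
from math import sqrt

def experience_mod(actual, expected, claim_count, full_cred_claims=683):
    # Textbook-style sketch: Z grows with claim volume, capped at full credibility.
    z = min(1.0, sqrt(claim_count / full_cred_claims))
    return z * (actual / expected) + (1 - z)

print(round(experience_mod(150_000, 100_000, 400), 3))   # larger risk: mod ~1.38, mostly its own experience
print(round(experience_mod(15_000, 10_000, 4), 3))        # tiny risk: mod ~1.04, barely moves off 1.00
```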

I don’t know that combining a bunch of classes that didn’t have enough occurrences (2) to be considered credible [so they got the class group average], with 98 potential exposure values, suddenly gives that group enough credibility to say “yeah, here’s the exposure relativities you should use across all 98 potential exposure values” and enough credibility to say “here’s the min/max exposure values for each of those classes, and each class has different min/max values.”

GL rating for premises/products is done at a location level, not at an aggregate level. Even a company with $20M in gross sales probably isn’t doing that at one location. It’s getting spread across multiple locations. Each location would be rated on its own exposures. No one is (should be) rating GL at a company level, given it’s almost certainly rating commercial property at a location (and building) level, unless it’s pricing the risk as a [really] large account - in which case, it’s not using location and class rating in the pricing.

This could be true. It could also be true that larger risks are systematically under-reporting exposures. Or, larger risks are reporting accurate exposures and underwriters are under-stating it for competitive reasons when settling on premiums.

And that’s my problem. Like a lot of people who do the analytics, I’m not buying the results when looking at things at a high level.

It may also be that whatever credibility function they use doesn’t have quite the right shape, and as a result, large accounts aren’t getting enough credit.

1 Like

Yes, the effect of a risk management department (or lack thereof) would show up partially in each individual company’s experience mod in the existing rating plan. Experience rating plans are great in that they can catch effects that are not explicit in the manual rates. But you could modify the manual rates to pick up a systemic effect in the data, and that would move some of the signal out of the mod and into the manual rate.

Credibility is not really measuring the same thing here as a size of risk variable. Credibility is determining how much signal is in the prior experience relative to what is expected in the manual rate. The size of risk variable is modifying the manual rate itself.
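A rough sketch of that distinction, with invented numbers and my own factor names:

```python
def rated_premium(exposure, loss_cost, sor_factor, z, actual_over_expected):
    manual = exposure * loss_cost * sor_factor   # size-of-risk relativity moves the manual rate itself
    mod = z * actual_over_expected + (1 - z)     # credibility only blends own experience against 1.00
    return manual * mod

# Both risks run 30% better than expected; the SOR factor shifts their manual rates
# regardless of how much credibility their own experience gets.
print(round(rated_premium(100, 1.12, 2.5, 0.2, 0.7)))     # small risk: high relativity, little credibility
print(round(rated_premium(5_000, 1.12, 0.9, 0.9, 0.7)))   # large risk: low relativity, mostly own experience
```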

It was just an example to illustrate that the losses aren’t scaling directly in proportion with exposures.

The larger risks would have to be over-reporting their exposures for their loss relativities to be less than expected. There could be a multitude of reasons as to why the effect is in the data, but no one really knows, and ultimately it comes down to human judgement to rationalize or reject the change in the model.