To start off, I am an FSA, so please go easy on me, particularly if I mess up statistical terminology.

I found myself needing to do what’s described in the topic at my work (long story). The lambdas involved can get very small, and I am observing that in the simulated set, the sample mean, for example, can deviate quite far from lambda.

Happily for me, my wife is an FCAS, and she mentioned reading about some kind of small-frequency adjustment for the Poisson for exactly this kind of situation on one of the CAS exams. It was something she never understood, so she took the “pray that it doesn’t come out” exam-taking strategy, although she has seen it applied at one of her workplaces when modelling nat cat.

Sadly for me, she has long since thrown out her old study notes, so I am just wondering if anyone here is familiar with this “small frequency adjustment” and able to point me to any relevant papers on the topic? Google searches don’t work well at all.

The sample mean is an unbiased estimator of the population mean. With small lambda come low observed counts. So what is there to adjust?
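A quick numeric sketch of what “low observed counts” does to the sample mean (NumPy; the sample size and lambdas are made up):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000  # number of simulated observations (made-up)

for lam in [1.0, 0.1, 0.001]:
    draws = rng.poisson(lam, size=n)
    # Relative standard error of the sample mean for Poisson data:
    # sd(mean)/lam = sqrt(lam/n)/lam = 1/sqrt(n*lam)
    rel_se = 1 / np.sqrt(n * lam)
    print(f"lambda={lam}: sample mean={draws.mean():.5f}, "
          f"relative SE={rel_se:.1%}")
```

At lambda = 0.001 with 10,000 sims, the relative standard error is about 32%, so the sample mean is unbiased and yet can still sit far from lambda in any one run.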

Rule of thumb is that at counts of about 20, the total count is approximately normal. Depending on the definition of lambda, it may not matter that it’s small if there is enough exposure. Below that, the median will not equal the mean, and any confidence interval on the expected value will be asymmetric. A posterior credible interval will be trickier too, although I don’t remember offhand what it does.
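The mean/median gap and the asymmetry are easy to see numerically (a NumPy sketch; the lambdas are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

for lam in [0.5, 5, 20]:
    x = rng.poisson(lam, size=200_000)
    lo, hi = np.percentile(x, [2.5, 97.5])
    print(f"lambda={lam}: mean={x.mean():.2f}, median={np.median(x):.0f}, "
          f"central 95% interval=({lo:.0f}, {hi:.0f})")
```

At lambda = 0.5 the median is 0 while the mean is about 0.5; by lambda = 20 the mean and median agree and the interval is close to symmetric around the mean.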

There is this article out there that I’ve used in the past to estimate probabilities when simulations get hard because of floating-point issues due to small lambdas.

I did consider brute forcing my way through by increasing the number of sims.

As my FCAS wife mentioned a vague recollection of seeing a more subtle method in a section of the CAS syllabus that she found incomprehensible, plus seeing it actually implemented in practice by the uber-technical chief actuary at one of her places of employment, I just thought of tapping into the power of the collective CAS hive mind here.

Now, that was for a binomial distribution where p is very small but N*p is a relatively reasonable number. Look for the Stirling approximation/Poisson tab. You can probably adapt it.
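The small-p binomial/Poisson correspondence being gestured at can be sanity-checked directly (pure-Python pmfs; the N and p here are made up so that p is tiny while N*p is moderate):

```python
import math

def binom_pmf(k, n, p):
    # exact binomial probability mass function
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k, lam):
    # Poisson pmf with lam = n*p as the approximating mean
    return math.exp(-lam) * lam**k / math.factorial(k)

n, p = 10_000, 0.0002   # p tiny, n*p = 2 "relatively reasonable"
lam = n * p
for k in range(5):
    print(f"k={k}: binomial={binom_pmf(k, n, p):.6f}, "
          f"poisson={poisson_pmf(k, lam):.6f}")
```

The two columns agree to several decimal places, which is why the Poisson limit is the usual substitute in this regime.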

If it’s an issue with Monte Carlo simulation, you might be able to apply some kind of stratified simulation.

For example, if the count can only realistically be 0 or 1, then simulate everything else with count = 0, then again with count = 1, and take a weighted average of the two.
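A minimal sketch of that idea, conditioning on the Poisson count and weighting each stratum by its exact probability (NumPy; the lambda and the lognormal severity are placeholders, not anything from the thread):

```python
import numpy as np

rng = np.random.default_rng(7)
lam = 0.001          # tiny claim frequency (made-up)
n_sims = 10_000

# Exact Poisson probabilities of 0 and 1 claims;
# for this lambda, P(N >= 2) is negligible.
p0 = np.exp(-lam)
p1 = lam * np.exp(-lam)

# Stratum 1: count = 0, so the aggregate loss is 0 in every sim.
loss_given_0 = np.zeros(n_sims)

# Stratum 2: count = 1, one severity draw per sim (lognormal as a placeholder).
loss_given_1 = rng.lognormal(mean=10, sigma=1.5, size=n_sims)

# Weight each stratum by its exact probability instead of waiting
# for rare count = 1 scenarios to show up by chance.
est_mean = p0 * loss_given_0.mean() + p1 * loss_given_1.mean()
print(est_mean)
```

Every simulation budget now goes into the interesting stratum, rather than ~99.9% of plain Monte Carlo draws landing on count = 0 and telling you nothing.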

Something else I vaguely remember is that some of the old exam material was about “collective risk models.” I think this was written when computers were less powerful, and it gave some analytic formulas for summing independent claims. This may be what your wife was remembering, and you could search for those terms.
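For what it’s worth, the classic analytic workhorse in the collective-risk-model literature is the Panjer recursion for compound distributions; whether that is the specific method in question here is only a guess. A sketch for a compound Poisson with a discrete severity (toy numbers throughout):

```python
import math

def panjer_poisson(lam, sev_pmf, max_total):
    """Panjer recursion for a compound Poisson aggregate.

    sev_pmf[j - 1] = P(severity = j) for j = 1..len(sev_pmf).
    Returns g, where g[s] = P(aggregate loss = s) for s = 0..max_total.
    """
    g = [math.exp(-lam)]           # P(aggregate = 0) = P(no claims)
    for s in range(1, max_total + 1):
        total = 0.0
        for j in range(1, min(s, len(sev_pmf)) + 1):
            # Poisson case of the Panjer (a, b, 0) recursion: a = 0, b = lam
            total += lam * j / s * sev_pmf[j - 1] * g[s - j]
        g.append(total)
    return g

# Toy severity: 1 with prob 0.6, 2 with prob 0.4 (made-up numbers)
g = panjer_poisson(lam=0.05, sev_pmf=[0.6, 0.4], max_total=5)
print(sum(g))   # close to 1 once max_total covers nearly all the mass
```

The recursion gives the full aggregate distribution exactly (up to discretization of severity), with no simulation noise, which is presumably why it appealed in the pre-cheap-computing era the post describes.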

Perhaps you could explain a little better what you need or what you are looking for. What do you mean by “small” lambdas? I am assuming that you mean lambda is smaller than, perhaps much smaller than, 0.25.

The Poisson is a pretty easy distribution to work with in Excel. If your sample mean is different from your lambda, then you have likely made a mistake in your simulation math.

I think Excel (and other software in general) can struggle with very small numbers, which I believe is related to storing them in binary floating point.

For example, I discovered a few years ago that 760*(1-0.9875) gives you a different result in Excel than 760*0.0125, although it’s only evident at around 10 decimal places.
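That is standard binary floating point rather than anything Excel-specific; Python, which uses the same IEEE 754 doubles, reproduces it:

```python
# Neither 0.9875 nor 0.0125 is exactly representable in binary, and the
# rounding error in the stored 0.9875 (coarser spacing near 1) is larger
# than the error in the stored 0.0125, so the two products differ slightly.
a = 760 * (1 - 0.9875)
b = 760 * 0.0125
print(a, b, a == b)   # a == b is False; both are within ~1e-13 of 9.5
```

Both results are 9.5 for any practical purpose; the discrepancy only matters if you compare them with `==` or print many digits.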