Fall 2020 Exam 7 Sitting

That’s a nasty bug that, if true, they need to fix. I have a compulsive habit of selecting multiple cells when I’m thinking.

I’ve been practicing with a mask on (in my house). Ridiculous, I know, but it’s 2020.

Just to be clear when I said “highlight” I meant fill. I don’t think selecting multiple cells should cause problems.

That’s good, I don’t think in this exam we have the need to fill a large swath of cells.

Yeah I thought about filling in my answers with yellow when I finish each problem, but I’ll probably just avoid that now.

will probably do the same lol

I’m also sitting this week. I’m saving 2019 as a full dry-run practice exam.

That being said, I thought the 2018 test was ridiculous. There were so many questions where they tested your algebra ability rather than how well you knew the concepts. For example, there was this least squares question where you had to back out what y-bar was, I think? In what world would you actually need to know this? Also, there was one where they did not provide you the theta that you need for the loglogistic distribution, and you had to back it out. Again, in what world would you not have this parameter MLE’d? So dumb.

Algebra is the one thing that is significantly slower in Excel, and you really have to type extra to show your work. I haven’t done 2019 yet, but I hope this isn’t a trend of things to come.

I feel exactly the same. I just finished 2018 and failed. For me, it’s generally harder than older exams and algebra just makes it much worse. If this sitting is just like 2018, I am definitely not passing. I’m saving 2019 for the day before the exam. I hope 2019 will bump up my confidence. I’m not very optimistic now.

The CAS has had a trend of testing unnecessarily complicated algebra more and more in recent years and thinking that it’s testing a higher level of Bloom’s taxonomy. I think the reason they do this is they don’t like to repeat exact questions with just different numbers, and there are only so many ways you can ask a standard Brosius or Clark problem. For those 2 problems in 2018 I worked them out mostly on paper before typing them into Excel.

I do think once you get more comfortable with Excel, problems like that are still fairly quick to do though. And a lot of the questions are sped up so much by using Excel that time really shouldn’t be an issue this sitting.

2018 was difficult and even though the algebra was messy, I don’t think you need some crazy level of understanding to be able to solve those problems. If you know the formulas/concepts and see the information given, it’s totally reasonable to get the solution.

I have today and tomorrow off work before sitting on Thursday. Like I said before, I’ll be working on writing up a somewhat brief syllabus overview and things I think you should know going into the exam, then posting it here. It’ll just be based off what I’ve seen on past questions and other topics I think are fairly testable. Good luck with the studying everyone!

I’ve finished the first part of my write-up. Gonna take a break and get back to it later today.

Brosius:

  • Formulas for a and b for least squares regression.
  • If a < 0, use chainladder.
  • If b < 0, use budgeted loss.
  • When b = 1, the result is just the BF result.
  • Least squares assumes the book is stable.
  • Formulas for Z, VHM, EPV.

Mack 2000:

  • Know the formula for Benktander reserve and that it’s the 2nd iteration of the BF.
  • MSE of Benktander is almost as small as optimal credibility MSE in most situations.
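The “2nd iteration of BF” point is easy to see in code: compute the BF reserve from the a-priori ultimate, then run BF again with the BF ultimate as the prior. A minimal sketch with made-up inputs:

```python
def benktander_reserve(paid, u0, cdf):
    """Gunnar Benktander (GB) reserve as the second iteration of BF.
    paid: losses paid to date; u0: a-priori ultimate; cdf: cumulative development factor."""
    q = 1.0 - 1.0 / cdf    # expected unreported fraction
    r_bf = q * u0          # BF reserve (first iteration)
    u_bf = paid + r_bf     # BF ultimate
    return q * u_bf        # GB reserve = BF reserve with U_BF as the new prior
```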

Hurlimann:

  • Calculation of m’s based on incremental loss and EP
  • Calculation of p’s and ELR based on m’s
  • Credibility for Individual, Collective, Benktander, Neuhaus, and Optimal reserves
  • Optimal reserve is for when Var[U_i] = Var[U_i,BC]; when this doesn’t hold, the credibility should be higher
  • Formula for MSE of the reserve for a given year in terms of E[alpha^2*U_i], Z, p, q, and t
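The m’s, ELR, and p’s calculation can be sketched as below (my own function names; small hypothetical triangle, and I’m assuming each m_k divides by the premium of only the AYs with data in period k):

```python
def hurlimann_m_p(incr_triangle, premium):
    """m_k, ELR, and the loss ratio payout pattern p for Hurlimann's method.
    incr_triangle: incremental-loss rows; row i has n - i entries."""
    n = len(incr_triangle)
    m = []
    for k in range(n):
        rows = [i for i in range(n) if k < len(incr_triangle[i])]
        s_k = sum(incr_triangle[i][k] for i in rows)   # total incremental loss, period k
        v_k = sum(premium[i] for i in rows)            # premium of AYs contributing to s_k
        m.append(s_k / v_k)
    elr = sum(m)
    # p_i: expected fraction paid to date for AY i (row i is developed n - i periods)
    p = [sum(m[:n - i]) / elr for i in range(n)]
    return m, elr, p
```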

Mack 1994:

  • 3 Assumptions of Chainladder (3,4,5 correspond to the expected loss, AY independence, and variance)
  • Weights multiplied by variance of C_i,k+1 is proportional to C_i,k^2
  • How to calculate residuals for different variance assumptions
  • How to calculate alpha^2 and MSE(C_i,I) (this hasn’t been tested since 2011)
  • How to calculate the last alpha^2 based on the prior ones (never tested)
  • Formula for sigma^2 of lognormal distribution and formula for lognormal confidence interval
  • Graphing C_i,k+1 vs C_i,k and whether the line is a good fit is a test of assumption (3)
  • Graphing the residuals vs C_i,k tests if assumption (5) is good. Residuals should vary randomly around zero without changes in magnitude
  • For the CY test (assumption 4), know how to calculate Z, E[Z_n], c_n, m, Var(Z_n) and do the test
  • For the correlation of adjacent development factors test (assumption 3), know how to calculate S, T_k, T, E[T], Var[T] and do the test
  • Know how to find the empirical confidence interval and that it understates variability for older ages and overstates it for younger ages
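The volume-weighted f_k and alpha_k^2 calculations above can be sketched as follows (hypothetical cumulative triangle; the last alpha^2 is left as None since it has to be extrapolated from the prior ones):

```python
def mack_f_and_alpha2(cum):
    """cum: cumulative triangle as list of lists, row i having I - i entries.
    Returns volume-weighted LDFs f_k and Mack's variance estimators alpha_k^2."""
    I = len(cum)
    f, alpha2 = [], []
    for k in range(I - 1):
        rows = [i for i in range(I) if k + 1 < len(cum[i])]
        fk = sum(cum[i][k + 1] for i in rows) / sum(cum[i][k] for i in rows)
        f.append(fk)
        if len(rows) > 1:
            # alpha_k^2 = 1/(#rows - 1) * sum_i C_ik * (C_i,k+1/C_ik - f_k)^2
            a2 = sum(cum[i][k] * (cum[i][k + 1] / cum[i][k] - fk) ** 2
                     for i in rows) / (len(rows) - 1)
            alpha2.append(a2)
        else:
            alpha2.append(None)  # only one ratio: extrapolate from the prior alpha^2's
    return f, alpha2
```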

Venter:

  • Essentially same 3 assumptions as Mack (3,4,5)
  • Test of assumption 3: do a regression and parameters are significant if their absolute value is at least twice their standard deviation
  • Test for correlation of development factors, formulas for r and T. Note, Venter says this tests assumption (4) while Mack’s correlation test tests assumption (3)!
  • For the above test know we’re more interested in correlation that’s prevalent throughout the entire triangle than just a pair of columns and that some columns will test as correlated just by chance
  • If variance is constant or proportional to expected loss, know formulas for f, h and how to calculate them iteratively (never tested before)
  • Know formulas for adjusted SSE, AIC, BIC (I think never tested before)
  • Know how to count parameters for chain ladder, cape cod, BF, grouped BF and that n excludes the column in the first development period
  • You could learn the significantly high/low diagonal test but it’s never been tested and I’m skipping it.
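For the adjusted SSE / AIC / BIC bullet, the comparison measures (smaller is better for each) can be sketched as below, assuming the approximations I remember from the syllabus version of Venter:

```python
import math

def venter_fit_measures(sse, n, p):
    """Venter's goodness-of-fit comparisons across models (smaller is better).
    n: number of predicted points (excludes the first development column);
    p: number of parameters in the emergence model."""
    adj_sse = sse / (n - p) ** 2        # adjusted SSE
    aic = sse * math.exp(2 * p / n)     # AIC approximation
    bic = sse * n ** (p / n)            # BIC approximation
    return adj_sse, aic, bic
```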

Clark:

  • Calculation of age based on average age of the loss.
  • Calculation of G(x) for loglogistic and weibull and how to truncate at G(x_t).
  • Calculation of unpaid loss for both Cape Cod and LDF method and the number of parameters for each.
  • Calculation of process and parameter variance given the appropriate values.
  • Calculation of sigma^2 (N includes all points unlike Venter, also know you do this with incremental losses)
  • Calculation of the residuals once you have sigma^2
  • Cape Cod uses additional information of the premium, so it should have lower parameter variance
  • A large benefit of the Clark method is data doesn’t need to be in triangle form since we just need the AY and the amount of loss paid between any two ages to do our fit.
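The two growth curves and the truncation can be sketched like this (parameter values in the test are made up; truncation scales the curve so G(x_t) maps to 100% reported):

```python
import math

def g_loglogistic(x, omega, theta):
    """Loglogistic growth function: G(x) = x^omega / (x^omega + theta^omega)."""
    return x ** omega / (x ** omega + theta ** omega)

def g_weibull(x, omega, theta):
    """Weibull growth function: G(x) = 1 - exp(-(x/theta)^omega)."""
    return 1.0 - math.exp(-((x / theta) ** omega))

def g_truncated(g, x, x_t, omega, theta):
    """Truncate at age x_t by rescaling: reported fraction = G(x) / G(x_t)."""
    return g(x, omega, theta) / g(x_t, omega, theta)
```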

Patrik:

  • 7 problems with reinsurance reserving
  • 6 parts of reinsurer loss reserve
  • 4 steps in reinsurance reserving
  • Stanard-Buhlmann (Cape Cod) Method reserve calculation and how to use a credibility weighted Stanard-Buhlmann/Chain Ladder
  • On-leveling premium can be challenging for reinsurance so this is a drawback of the Stanard-Buhlmann method
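The Stanard-Buhlmann reserve calculation can be sketched as below (my own function name and toy inputs): one ELR is estimated from all years combined, then applied BF-style to each year’s unreported fraction.

```python
def sb_reserves(premium, reported, cdf):
    """Stanard-Buhlmann (Cape Cod) reserves per accident year.
    premium: on-level earned premium by AY; reported: reported losses to date;
    cdf: cumulative development factor by AY (% reported = 1/cdf)."""
    pct_rep = [1.0 / c for c in cdf]
    # One ELR from all years: total reported / total used-up premium
    elr = sum(reported) / sum(p * u for p, u in zip(premium, pct_rep))
    return [p * elr * (1.0 - u) for p, u in zip(premium, pct_rep)]
```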

Shapland:

  • Calculation of residuals from the fitted means and actual values
  • Know steps (0) through (5) of the bootstrapping process
  • Calculation of phi in terms of residuals, N, p (note N includes first column again unlike Venter)
  • Given alpha and betas, know how to calculate the fitted means in each cell
  • In the ODP case the mean result is just chain ladder, but it’s still useful since it gives us a distribution around the loss reserve estimate
  • GLM over ODP advantages are you only need to use enough parameters to fit data, CY trend can be included and irregular data shapes can be used
  • GLM drawbacks vs ODP are GLM must be solved each iteration and results are no longer explainable as LDFs
  • For negative incremental losses, -ln(-q) can be used but if entire column sum is negative, subtract lowest negative amount from whole triangle, model, then add back in (note if incr loss = 0, just use 0 since ln(0) is undefined)
  • If values are missing they can be estimated from surrounding cells, but then those residuals should not be used
  • Outliers should be removed if they don’t actually represent the variability of the losses and are not expected to happen again
  • Know 3 ways to adjust for heteroscedasticity (I would be very familiar with stratified sampling and method 2 (applying the hetero-adjustment factors))
  • If the first development period isn’t a full year, GLM will still forecast full AY so for most recent AY you need to adjust the final value
  • If the last calendar period isn’t a full year, calculate annual LDFs without last diagonal, annualize the diagonal, then after modeling de-annualize the most recent diagonal
  • If exposures are changing make sure to divide by exposure instead of using loss dollars
  • When plotting residuals if they don’t behave as expected, in the GLM case you can change your variance assumption or add/remove parameters
  • If doing multiple lines of business with correlation you can do location mapping or resorting (location mapping preserves the historical correlation assumption by taking residuals from the same part of each triangle; resorting is more flexible)
  • There’s a lot more detail in parts of Shapland but it hasn’t really been tested, and the above is what I would focus on knowing first.
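The residual and phi calculations from the first few bullets can be sketched as below (toy numbers; these are the unscaled Pearson residuals for the ODP case, where the variance power z = 1):

```python
import math

def odp_residuals_and_phi(actual, fitted, p):
    """Unscaled Pearson residuals and scale parameter phi for an ODP bootstrap.
    actual/fitted: flattened incremental cells (fitted means from alpha/beta);
    p: number of model parameters. N here counts every cell, unlike Venter's n."""
    resid = [(a - m) / math.sqrt(m) for a, m in zip(actual, fitted)]
    n = len(actual)
    phi = sum(r * r for r in resid) / (n - p)   # scale parameter
    return resid, phi
```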

Very nice. Looking forward to the Brehm write-up lol.

Annoying that it’s not made explicit in the paper (from what I’ve seen so far), but plowback ratio = 1 - dividend payout ratio, correct?


growth = ROE * (1 - dividend payout ratio), since it’s the leftover funds that aren’t paid out that get ‘plowed back’ into the company. So you’re right: (1 - dividend payout ratio) is the plowback ratio.

That is correct and I thought it was made explicit?

Yep, that’s right. For the DDM.

For the FCFE model g = ROE*reinvestment rate since there aren’t any dividends in that case.
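Both growth formulas are one-liners, but here’s a tiny sketch anyway (function names are mine) since mixing them up is an easy exam mistake:

```python
def ddm_growth(roe, dividend_payout_ratio):
    """DDM: g = ROE x plowback ratio, where plowback = 1 - dividend payout ratio."""
    return roe * (1.0 - dividend_payout_ratio)

def fcfe_growth(roe, reinvestment_rate):
    """FCFE: g = ROE x reinvestment rate (no dividends in this case)."""
    return roe * reinvestment_rate
```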

Piface and Kippy… we’ve got more and more of the old gang


It is nice to see you all again.


Maybe we should grab a drink at some point after this exam sitting is all over. I’m tired of covid killing my social life

Definitely down for that. I’m really surprised I haven’t run into you at Giant Eagle at all.