CAS Exam Philosophy Discussion


And it depends on the given graders and the exam answers they see.

The MQC is one example where the CAS preaches transparency, but in reality they just look at statistics and make a judgment call about how many people will pass a given test.

Can we talk about the reasoning behind the CAS testing algebra on fellowship exams? I mean, honestly, where do I even begin on that? No words.

I think what they are trying to see is whether you understand the reasoning behind the formulas. The Brosius exam 7 question from, I believe, 2018 did an okay job at this.

If you didn’t understand the fundamentals, then you wouldn’t be able to figure out the formula to set up the algebra. But then it becomes an annoying algebra problem.

The Cape Cod one from that year had two unknown variables, one of which cancelled out, leaving you with one of those gnarly formulas with x^2 + 1/x^2 in the equation. I believe I solved it by trial and error: since x was the ELR, you knew roughly where it had to be.

But that was a pretty terrible question.
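For what it’s worth, the trial-and-error approach described above is easy to sketch as a bisection search. Everything here (the target value 4.25 and the bracket) is invented for illustration, not the actual exam’s numbers:

```python
def bisect(f, lo, hi, tol=1e-9):
    """Simple bisection: assumes f(lo) and f(hi) bracket a single root."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if f(lo) * f(mid) <= 0:
            hi = mid  # root is in [lo, mid]
        else:
            lo = mid  # root is in [mid, hi]
    return (lo + hi) / 2.0

# Hypothetical equation of the shape described: x^2 + 1/x^2 = 4.25,
# where x is an ELR, so it has to land somewhere in (0, 1).
target = 4.25
root = bisect(lambda x: x**2 + 1/x**2 - target, 0.2, 0.9)
print(round(root, 4))  # 0.5
```

Since the ELR is bounded in a narrow, known range, a handful of guess-and-adjust iterations gets you close enough for exam credit, which is presumably why trial and error worked.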



“I don’t think so, and I think people make the issue too easy for themselves. If you increase the time then you’ll also pass people who aren’t qualified just because they were able to figure things out in the moment.”

Ah, so here we’re coming to a fundamental disagreement. These exams aren’t multiple choice. You can’t just guess and check the choices until you get a right answer. If you’re someone who didn’t memorize the formula for the credibility of a claims-free year from Bailey and Simon, but instead spent 10 minutes deriving it from first principles, I don’t consider you any less of a qualified actuary than the person who memorized the flash card, took 20 practice exams, and knows how to answer it just by glancing at the table (I’m the latter one, btw). Knowing how to answer that question faster isn’t the same thing as being the more qualified actuary.

"Assume the CAS wants an exam of difficulty where 40% pass. They accept that some % of the group is sufficiently qualified but don’t know who. All the complaints posit that we’re passing lots who we shouldn’t and missing a bunch who we should. How can we change the exams to keep the % passing constant while more effectively sorting?

Just opening the floodgates doesn’t make your exam more effective, it just increases your type 1 error."

So a couple of points here. I think your premise is flawed. The CAS doesn’t even claim this to be the case. Why should there be an arbitrary 40% pass rate? Why are you assuming a coherent rank-ordering of “qualified actuary” even exists? In the extreme case where everyone sitting for exam 9, having already passed 9 other very difficult actuarial exams, is actually qualified (not that extreme when you think about it), why would you want to rank-order them and only pass the top 40%? Isn’t it pretty obvious that any such rank-ordering will be fairly arbitrary? I.e., if they are all qualified actuaries, we can rank order them on their ability to solve algebra problems very quickly and get one 40% cohort. But we could also rank order them on their language and grammar usage (which is arguably even more relevant for a professional actuary) and come up with a different ordering. Or we could rank order them on their ability to solve complex logical puzzles and get yet a third ordering. None of these three methods is any better at differentiating what the test is allegedly a measure of: whether the person is a qualified actuary.

By constraining yourself to a set pass mark you are necessarily making an arbitrary choice that results in a worse ordering. Making your test actually measure what it’s supposed to measure doesn’t increase your risk of type 1 errors. Sticking with an arbitrary ~40% pass rate and testing algebra speed does increase your type 1 errors, though.

As an example: last year’s test had a really dumb problem where we had to solve a system of 3 equations to get the answer. Some people did this quickly. Some people did this slowly. Then we get to the last question of the exam. Imagine the person who did it quickly gets to the last question and says, “Wow, I have no idea how to answer that question, I don’t even understand it.” Whereas the person who did it slowly runs out of time before they even read it, but had they had time they could have answered it for full points. The CAS then looks at their exam statistics and says, “Wow, question 20 sure was difficult, so many people didn’t even attempt it, I guess we’ll set the MQC relatively low for that one.” The fast algebra guy might score higher, despite not being fully qualified. Any time you’re testing on some other variable you’re going to get these kinds of errors.

Exam 9 was much more puzzle-focused, with no algebra. They are shying away from it.

I believe exam 9 two or three years ago had an algebra question with a system of three equations. And you had to do it twice, in parts a and b, each worth 1.5 points. From what I heard, those questions took forever.
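To illustrate why this reads as a speed test rather than a knowledge test: once tools are allowed, a system of three equations is a one-liner. The coefficients below are made up for illustration, not the actual exam’s:

```python
import numpy as np

# Hypothetical 3x3 system (invented coefficients):
#   2a +  b +  c = 7
#    a + 3b + 2c = 13
#    a +  b + 4c = 15
A = np.array([[2.0, 1.0, 1.0],
              [1.0, 3.0, 2.0],
              [1.0, 1.0, 4.0]])
rhs = np.array([7.0, 13.0, 15.0])

# Direct solve instead of grinding through substitution by hand
a, b, c = np.linalg.solve(A, rhs)
print(round(a, 6), round(b, 6), round(c, 6))  # 1.0 2.0 3.0
```

A spreadsheet’s matrix functions do the same thing, which is part of why doing it twice by hand for 1.5 points each is contentious.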

Being able to do algebra is part of being an actuary? Explain.

Algebra falls under puzzle imo.

I can confirm. I was pretty certain that I had failed the exam because of this question. Thankfully, I ended up scraping together enough points to still get the pass.

Hi, I’m an education and exam volunteer, and I’m interested in exam philosophy. I’ve just browsed this thread.

Well, when you had to do all the calculations by hand, that sounds like a huge waste of time. But now that you have a spreadsheet available, that sounds like a terrific question. Blindly following some method without adapting it to the data you actually have is a pretty common failure mode for inexperienced actuaries, and it is a really dangerous failure mode, because you think you understand something about the problem, but you are wrong.

While I’m sure there have been grading failures of that type, I can tell you that as a grader I have seen answers I didn’t expect and decided they were also right and gave them full credit.

Umm… seriously? I do algebra all the time as an actuary. I tell people that being an actuary is a terrific way to monetize being really good at high school algebra.


That exam 8 question was when we had to do it by hand…

I get that. And I agree that the exams are often too long. I hope that moving to a spreadsheet will help with that in a lot of ways. Including allowing more questions similar to what you describe.

It’s a constant struggle to get exams to fit in the time constraints, and it’s a struggle that often isn’t successful, imo.


I don’t write upper levels, but you’d be surprised how much time we spend reviewing the questions, even for multiple choice. And if there’s any ambiguity of interpretation, we likely throw the question out / give free credit.

@Lucy, from your experience, was it common for them to make across-the-board adjustments to the MQC, pass mark, etc. based on how candidates performed? For example, if a question was badly worded or the exam was too long. Take us inside what the process would hypothetically look like now (a week or so before results). What are they potentially doing with the diagnostics (hypothetically, of course)? My question isn’t directed at this sitting. And of course, I’m not asking you to reveal anything confidential.

As I understand it, there aren’t any “across the board” adjustments. If a question is found to require an adjustment to the MQC score (I’ve never heard of an “upward” adjustment, FWIW), then the MQC threshold is just changed; no candidate’s score is changed, including for the item in question. But that change could move someone from (barely) not passing to (barely) passing. This latter change isn’t reviewed; it just happens.

As for the “diagnostics” . . . the likely scenario is that something “odd” (however that’s viewed by the CAS Board) was seen and asked about. The Exam Chair may have insight and be able to answer right away . . . or they may not have thought that “oddity” important initially and now have to spend a bit of time looking at things to answer the Board’s question.

And given that many companies are still processing year-end results and trying to get Opinions drafted and ready . . . that could add some additional time for everyone to respond.

The MQC score starts as a score for each item, and the numbers are added up.

I know that if a question is found to be defective, or harder than expected, or… there can be an adjustment to the MQC for that item, which changes the MQC overall.
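The per-item mechanics described above can be sketched in a few lines. All the numbers here are invented for illustration; this is just the shape of the process, not actual CAS figures:

```python
# Hypothetical per-item MQC scores; the overall pass mark is their sum.
item_mqc = {"Q1": 1.5, "Q2": 2.0, "Q3": 1.0, "Q4": 2.5}
pass_mark = sum(item_mqc.values())           # 7.0

# Suppose Q3 is judged defective or harder than expected:
# its item MQC is lowered, which shifts the overall pass mark.
item_mqc["Q3"] = 0.5
adjusted_pass_mark = sum(item_mqc.values())  # 6.5

# Candidate scores are untouched, but someone who was just below
# the old mark can flip to passing under the new one.
candidate_score = 6.75
print(candidate_score >= adjusted_pass_mark)  # True
```

Note that, consistent with the description above, the adjustment only ever moves the threshold, never any individual score.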

I assume there is some sort of overall adjustment when the exam is too long, but I’m not familiar with any details. I will say, based on public information plus my observations as an exam proctor, that exams that looked too long when I proctored them tended to have fewer candidates pass than exams that looked to be the right length. So my bias is that whatever adjustment they’ve made for “exam was too long” is smaller than what the lack of time costs candidates. I suppose if your philosophy is that you care more about making sure each passing candidate has demonstrated competency than about fairness to candidates, this is the right choice. Once an exam is broken (for instance, by being too long) there isn’t any way to fix it that’s fair to all the stakeholders.

Yes. Graders and exam mucketymucks volunteered assuming that they needed to have a certain amount of time free to deal with exams in November. When that got moved to Dec/Jan, some of those people no doubt ran into unexpected conflicts with other priorities. I expect that’s added something to the processing time.

Ok. Thanks @Lucy @Vorian_Atreides.

I will say that not releasing the exams is really intended to build a question bank for better / more frequent CBT exams.

That’s different from the inability to differentiate qualified vs. not qualified actuaries. [Discussion on what a “qualified” actuary in a context other than SAOs omitted.] I might amend the statement to say that the CAS exam system does a poor job of differentiating those who really know the material and can properly apply it from those who know the material but can’t properly apply it. What we have now is a system designed to cater to those who came through actuarial science programs and are good at test-taking and math … which is great, but it’s not what the actuarial field traditionally represents.

Don’t worry, though, because … unicorns.