The Problem With Exams - seeking feedback

Hey actuaries, can you help me out?

I’ve drafted an essay that needs feedback. I’m happy to send it to you directly; I'm not really interested in publishing here just yet. In the essay, titled “The Problem With Exams”, I discuss certain aspects of the examination system (in general) which seem to be significant deficits.

I’m not sure yet what I plan to do with this, but in order to refine the arguments and consider whether I may just be looney-tunes, I’d appreciate feedback from a wide audience. Bonus if some current exam-takers, some recently-qualified, and some longer in the career all get a chance to read and reply. Plus I’m SOA, so if any CAS folks (or IFoA or IAA pros) want to take a look, that can only help, because the issue isn’t SOA-specific.

Send me a message with your email address, post in the thread, or find me on LinkedIn: [blurred]


I’ll take a look. I make no promises on the value of my feedback.

actuary1695814 [at] gmail dot commodore sixty four

Also, at c. 2:25 pm CST I sent you a LinkedIn invite…just so you know who that is.



I’ve sent feedback on CAS Exam procedures in the past, with varying success.


Woof, guy is sexy.

Hey Bro, just curious whether your essay touches on how the current CBT process is very much a mystery to candidates? Especially how the CAS generates the versions of an exam, and how it determines the difficulty/time for a minimally qualified candidate to solve it? I feel the way the CAS determines and generates exams could create too much variance across candidates to be considered fair. (Questions on CAS upper-level exams aren’t like SOA prelims, where a multiple-choice question takes about 6 minutes to solve.)

I have registered for an exam in May. I was thinking of writing something similar to the CAS after my exam is done. But if your letter touches on that, that’d be awesome, since I wouldn’t need to write it then. haha.

No, it’s actually a very different perspective. More towards the philosophy of exams as a condition of actuarial employment, when the exams teach a very divergent set of skills from what is stated as desirable for “fully qualified actuaries”.


Aww, bummer. Is there a template or anything that you used to draft your letter? Does it have to follow any particular form?

I guess in my case I don’t have too much to write, except that I think candidates could really benefit from a bit more open communication from the CAS on the RNG aspect of the exam: how the CAS ensures the randomly generated exams are fair to each candidate.

You should probably just draft that as a letter to the Examination Committee. If you can find a couple of your peers who feel the same, get them to read the letter and co-sign with you. The Committee will generally take letters that come from multiple people, or multiple letters on the same topic, more seriously than just one.

I would be willing to take a look, and that goes for @Red as well.

I’ve helped various people in the past contact both the SOA and CAS with respect to specific exams, the general process, exam changes, and other things as well.

My main issue with exams is what is appropriate for exams and what is not, as well as what is appropriate for a professional organization to certify for professional credentials and standards, and what is not.

I often look at what is going on with other certifications and professions to get an idea – not that we have to do the same thing, but looking at what sorts of approaches accomplish what sorts of things.

What, exactly, are we trying to certify?


@Red, feel free to also email your proposed communication. I have an educational research background (lots of work on assessment and curriculum development) and am very familiar with how Bloom’s Taxonomy should be interpreted and utilized.


Are you aware of any other professional organizations that give different exams to different candidates at upper levels?

The only thing similar is perhaps the CPA in Canada, where it’s a 3-day exam, and in one part of the exam, candidates on different paths are given a case specific to their track (e.g. tax, auditing, financial reporting). However, the case is the same for all candidates within each track.


Thank you @Vorian_Atreides & @meep. I am writing my last exam in May, and I will start giving the question serious thought as soon as I am done with the exam. It will likely be a short one, as I am merely asking them to be more open in communicating the CBT processes to candidates.

Though I would say, if they are indeed just giving a random lot of questions to random candidates, I would strongly encourage them to change that practice. I think fairness to candidates is important for all professional exams. Unfairness, whether intentional or a byproduct of a process, should really be eliminated as much as possible. A question bank and whatnot may be fine, but you shouldn’t use candidates as your beta testers for question difficulty. I would really hope this is not the case, as that would require a lot of effort, and I would rather spend time with a certain member of my family who will soon join the world. :slight_smile:

Might consider that to practice medicine, you often have to pass several additional exams to be in a position to take the board exams.

But that line of thinking isn’t going to produce a better outcome as you’re now asking for a week-long exam to test most of the material on the upper exams.


Devil’s advocate: what if the upper level is now a 3-day affair, but there is only 1 upper exam rather than 3? Rather than the current format, it would just be a case study for each of the 3 subjects?

Alright… I’ll probably get stoned by the angry mob formed by other candidates now…

Those sorts of setups are often “pass all or none” . . . so do you want to study for everything if you are weak on one area by the sitting?

I’m skeptical that the variation from sitting to sitting on upper level FSA exams is any fairer than drawing from a pool of similar questions on the CBT.

No, I do not… :rofl: that is a great point.

@Tiffany To me, it wouldn’t be a problem if everyone gets the same set of questions from a pool.

If everyone gets a different set of problems from a pool, how would you ensure it’s fair/comparable in difficulty? How would the CAS assess the difficulty? I think (I might be wrong) a written question might be harder to assess compared to a multiple-choice question. What about the IQ question? I'm not sure if all candidates get the same IQ question or different ones. How would the CAS catch defective questions? If a CAS question was defective and was only discovered after a few sittings, what happens to candidates who lost points due to that question? Just SOL?

The SOA has years of data built up in its question bank, likely with results from candidates over a period of time. I believe each sitting they introduce 1 or 2 new questions for a given exam? The CAS does not have this luxury, and I think it’s a bit cruel for candidates who prep for the uppers to be the guinea pigs for a few years.
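To make the worry about comparable difficulty concrete, here is a toy simulation (purely hypothetical numbers, not the CAS's actual process) of how much expected form difficulty can vary when each candidate's exam is drawn at random from an item bank:

```python
import random

def form_difficulty_spread(bank, form_size, n_forms, seed=0):
    """Draw n_forms random forms of form_size items from a bank of
    per-item difficulties (probability a typical candidate answers
    the item correctly) and return the min/max mean difficulty
    across the generated forms."""
    rng = random.Random(seed)
    means = []
    for _ in range(n_forms):
        form = rng.sample(bank, form_size)
        means.append(sum(form) / form_size)
    return min(means), max(means)

if __name__ == "__main__":
    # Hypothetical bank: 200 items with difficulties between 0.3 and 0.9
    rng = random.Random(42)
    bank = [round(rng.uniform(0.3, 0.9), 2) for _ in range(200)]
    lo, hi = form_difficulty_spread(bank, form_size=25, n_forms=1000)
    print(f"hardest form mean difficulty: {lo:.2f}, easiest: {hi:.2f}")
```

If the gap between the hardest and easiest randomly drawn forms is material, candidates are not facing equivalent hurdles unless the scoring somehow adjusts for it, which is exactly the kind of process transparency being asked for.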


Medical boards differ, and then don’t ask me about what PhD math qualifiers are like (seriously, don’t ask).

That they differ is not the issue. I have been on the other side, and I know what metrics can be used to see that you’ve got equivalent/valid exams. Standardized exams have been around for over 100 years, and it’s not that they give everybody the exact same questions.

That said, given that CAS has been known to whiff the operational aspects of the exams many times before, I wouldn’t be surprised if they screwed the pooch yet again.

The area you’re looking for (if you want to know the types of metrics used) is called psychometrics. I can’t tell you either what the SOA uses (because I partly know and am not allowed to tell) or CAS uses (because I don’t know), but it’s not like they come up with new metrics.

Some example metrics can be seen here:
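As a toy sketch of the kind of classical item statistics psychometricians track (item difficulty, i.e. proportion correct, and point-biserial discrimination), and emphatically not what the SOA or CAS actually uses, the basics fit in a few lines:

```python
def item_stats(responses):
    """responses: one row per candidate, each a list of 0/1 item scores.
    Returns (difficulty, point_biserial) for each item.

    Toy classical test theory only; operational programs use far more
    elaborate models (e.g. item response theory) for equating forms."""
    n = len(responses)
    n_items = len(responses[0])
    totals = [sum(row) for row in responses]      # raw total per candidate
    mean_t = sum(totals) / n
    var_t = sum((t - mean_t) ** 2 for t in totals) / n
    sd_t = var_t ** 0.5
    stats = []
    for j in range(n_items):
        col = [row[j] for row in responses]
        p = sum(col) / n                          # difficulty: proportion correct
        if sd_t == 0 or p in (0.0, 1.0):
            stats.append((p, 0.0))                # degenerate item: no discrimination
            continue
        # point-biserial: correlation of item score with total score,
        # r_pb = (M1 - M) / s * sqrt(p / (1 - p))
        mean_1 = sum(t for t, c in zip(totals, col) if c == 1) / sum(col)
        r_pb = (mean_1 - mean_t) / sd_t * (p / (1 - p)) ** 0.5
        stats.append((p, r_pb))
    return stats

if __name__ == "__main__":
    # 5 candidates x 3 items (made-up data)
    resp = [
        [1, 1, 0],
        [1, 0, 0],
        [1, 1, 1],
        [0, 0, 0],
        [1, 1, 1],
    ]
    for j, (p, r) in enumerate(item_stats(resp), 1):
        print(f"item {j}: difficulty p={p:.2f}, point-biserial={r:.2f}")
```

Tracking statistics like these across items and forms is one way a testing program can argue that different draws from a bank are comparable; a flat or negative point-biserial is also a common flag for a defective item.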


Er… Sorry Bro. Did not mean to hijack your thread…

@meep: I think I have seen you post these before (in the old AO) but never read up on it. I will after the exam this time.


Not for QFI they don’t: each exam was truly unique when I was writing them.