Pearson failure today

I thought it had to do with Pearson updating their software and running into a problem, or something along those lines. I really don’t think “too many CAS candidates” is likely to have been relevant. They have a lot of other clients.


Maybe they were just ahead of the curve on the Crowdstrike update


Timing of a change in contract with AWS or whomever to provide a certain amount of memory or other resources to Pearson?

(I don’t know that to be the case; just offering up one possibility.)

That’s fair; I haven’t seen any post-mortem on what caused the issue beyond the official statement from Pearson.

Eh, I would argue that it should be your knowledge, not your preparation.

If you took the exam in 2022 and I took the same exam in 2023, I probably had access to better-quality study guides than you did. Through no fault of your own, it should be easier for me to pass than for you if the pass mark is based on knowledge rather than a predetermined number of passers.

That said, I suspect both societies make their exams increasingly difficult to counterbalance the better study materials, which IMO is wrong and undermines the concept of the MQC (minimally qualified candidate).
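
To make the distinction concrete, here’s a minimal Python sketch of the two philosophies (the scores and thresholds below are invented for illustration):

```python
# Hypothetical scores out of 100 for one sitting (invented data).
scores = [55, 62, 68, 71, 74, 79, 83, 88, 90, 95]

# Criterion-referenced: everyone who clears a fixed knowledge standard
# passes, so better study materials can raise the pass rate over time.
pass_mark = 70
criterion_passers = [s for s in scores if s >= pass_mark]
print(len(criterion_passers))   # 7 of 10 pass this time

# Norm-referenced: a predetermined share of candidates passes, no matter
# how much the cohort actually knows.
pass_rate = 0.40
n_pass = int(len(scores) * pass_rate)
norm_passers = sorted(scores)[-n_pass:]
print(len(norm_passers))        # always 4 of 10 pass
```

Under the criterion-referenced mark, a cohort with better study guides passes at a higher rate; under the fixed pass rate, it can’t.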

Bruce definitely talked about looking at the results and seeing where the breaks were. While I don’t agree that should be a factor in setting the pass rate, it pretty clearly was. At least for the old jointly administered exams.

The method of deciding the probability that an MQC would get a question right seems marginally better.

Easy question… 96% of MQCs will get it right and 4% knew how to do it but made a dumb error they will kick themselves over when they see the solution.

Hard question… 45% of MQCs will get this one right.

0.96 + 0.45 + …

That’s your pass mark.

ETA: Oh, on written-answer exams you have to weight by the points, but I think the idea still holds. Maybe 80% would get full points, 5% would get all but one point, and so on & so forth. More numbers to crunch, as there are more possible point distributions.
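
Putting both flavors together, here’s a minimal sketch in Python (every probability and point distribution below is invented for illustration, not an actual CAS figure):

```python
# Multiple-choice: the pass mark is the expected number of questions an
# MQC answers correctly, i.e. the sum of the per-question probabilities.
mc_probs = [0.96, 0.45, 0.80, 0.65]   # judged P(MQC gets it right), per question
mc_pass_mark = sum(mc_probs)          # ~2.86 questions out of 4

# Written-answer: weight by points. For each question, take the expected
# points an MQC earns over the judged point distribution.
def expected_points(distribution):
    """distribution: list of (points_earned, probability) pairs."""
    return sum(points * prob for points, prob in distribution)

wa_questions = [
    # 5-point question: 80% full credit, 5% all but one point, etc.
    [(5, 0.80), (4, 0.05), (2, 0.10), (0, 0.05)],
    # 3-point question with its own judged distribution.
    [(3, 0.45), (1, 0.35), (0, 0.20)],
]
wa_pass_mark = sum(expected_points(q) for q in wa_questions)
print(mc_pass_mark, wa_pass_mark)     # ~2.86 and ~6.1 points
```

Same idea either way: the pass mark is the expected score of the minimally qualified candidate.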

A shower thought I had this morning:

In a world where AI is now generating content and scripts for content, wouldn’t there be a certain risk to using open-book exams?

(Or…is demonstration of a skilled use of AI to analyze and summarize a sufficient qualification?)

Final announcement on the exams by CAS: Final Announcement Regarding April/May 2024 Exam Sitting | Casualty Actuarial Society

Note that their front page still doesn’t have their announcements sorted chronologically:
[screenshot of the CAS announcements page]

There’s very little information but basically:

  • All May 1st candidates were offered a retake even if their exam didn’t have any complications (I believe this was already known to all)
  • No differences in pass mark were considered for May 1 takers

This follows what I assumed was the process. There was no difference in grading, except that some people were graded on two attempts.


I’m surprised there was nothing from Pearson’s end that could be used to suss out issues like “exam was intended to start at 8 AM but started at 11 AM” or “exam was intended to last 4 hours but the sitting ran from 8 AM to 3 PM.” Is there nowhere for Pearson staff to enter comments in the case of an unusual sitting? Feels like the CAS could search for a more competent company if this one keeps failing and is incapable of assisting in the wake of another major systemic failure. And what about post-exam surveys?

Also surprised there wasn’t at least some voluntary attestation along the lines of “I had issues that affected my sitting”; instead, every May 1st sitter got a second chance even if their sitting had no problems.

No mention of whether the MQC mark was adjusted for the cohort of people getting a second pass at the material. We only know that May 1 and non-May 1 sitters were graded the same.


The more I think about it, the more conflicted I am. Part of me wants an adjustment for the May 1 sitting, but another part of me understands that the non-May 1 people would have gotten the same mark for the sitting if the whole fiasco hadn’t occurred.

In a way the CAS is saying “we are rewarding those who suffered, not punishing those who didn’t.” Would I rather sit in a test room panicking if it meant I got to look at the exam twice? :100:. But at the end of the day, no one got a lower mark because of what happened.


After thinking about this overnight, I still would like an explanation from the CAS of why they couldn’t pull a backup exam from the bank of questions they have been building for the last 5 years.
Granted, 7 & 9 might not be feasible due to the changing syllabus, but MAS 1, 5, and 6 have all had pretty stable syllabi for the last few years, and there should have been either questions available or an explanation as to why questions were not available.


Someone was saying that the CAS did have a backup exam but Pearson administered the wrong one.

Honestly, I wouldn’t be surprised if the CAS had nothing prepared.


The syllabus only changed so much. I could have settled for an exam that was 75% prior problems plus 25% of the same exam a second time, if indeed the CAS had no backups ready.

It seems like you could develop two sittings’ worth of questions before introducing new material, but hindsight is 20/20. Nonetheless, it seems reasonable and possible to give a re-administered exam that draws more on prior exams than it repeats the original.

Unfortunately the CAS said they’ve made their final announcement, so clearly they don’t want to give clarity or receive feedback.


Looks like that’s changing.

[screenshot of the announcement]

Source: New exam security measure for April 2025

And for those of us who don’t speak British English and wonder what “remote invigilation” means:

[screenshot explaining that remote invigilation means remote proctoring]

Source: IFoA enhanced exam security measures for 2025

Hope it goes better than the CAS’ attempt at the same.


From what I have been told, they will be using AI-assisted software to parse through the many thousands of hours of video to check for outliers.

I think the CAS tried proctoring via video before the technology could be used efficiently (too much data to store and then parse through), so it predictably ended badly.

Still a risk this goes badly too (I have yet to see remote anti-cheating controls that cannot be worked around), but I will be watching the whole thing with interest.

The IFoA was forced to go down this road due to quite a lot of cheating being reported from non-UK sources in Asia (this seems to be a real problem in that part of the world).