Insurance Actuaries in the Age of Agentic AI

An Open Letter to the Casualty Actuarial Society and the P&C Actuarial Profession

To the Board of Directors of the Casualty Actuarial Society, the leadership of the CAS Institute, the members of the Syllabus and Examination Working Group, and the actuarial educators shaping the next generation of P&C professionals:

This letter is written with deep respect for the institutions you steward and with genuine urgency about the gap between what those institutions currently teach and what the profession will demand within the next three to five years. The rise of agentic AI is not a distant abstraction. It is the near horizon. And the actuaries you are credentialing today will inherit a professional landscape that looks fundamentally different from the one your current syllabus was designed to serve.

I write this not as a critic but as a practitioner who believes the CAS and its sister organizations are uniquely positioned to lead this transition, provided they act with the same rigor and foresight they have always applied to the science of risk itself.


The Philosophical Role of the Insurance Actuary

For over a century, the insurance actuary has occupied a peculiar and philosophically interesting position within the financial system. They are not the salespeople. They are not the claims adjusters. They are not the ones negotiating agency contracts or shaking hands at industry conferences. And yet, without them, the entire edifice of insurance (the promise to make good on loss, whether from a fender bender, a hurricane, a product liability claim, or a complex casualty occurrence) would rest on little more than instinct and hope.

The insurance actuary is, in the deepest sense, an applied epistemologist: someone whose job it is to discipline what a company believes about risk and to translate that belief into a price, a reserve, a capital requirement, or a strategic recommendation. They sit at the uncomfortable intersection of mathematics, judgment, and commercial reality, mediating between what can be known, what must be estimated, and what must simply be accepted as uncertain.

Consider an analogy. For centuries, cartographers held enormous power. They decided what was mapped and what was left as terra incognita. They chose projections, scales, and annotations. A Mercator projection tells you something very different about the world than a Peters projection, though both are “accurate.” The cartographer’s authority came not merely from technical skill in surveying, but from the editorial judgment embedded in every map they drew.

The insurance actuary has long played a similar role. When an actuary selects a loss development pattern, chooses a trend factor, decides which years of experience are “credible,” or determines the appropriate tail factor for a long-tailed casualty line, they are making cartographic decisions. They are drawing a map of risk. And like all maps, actuarial models are reductions, useful precisely because they leave things out. Alfred Korzybski’s famous dictum applies with full force: the map is not the territory. The actuary’s professional value has always been in knowing which simplifications illuminate and which ones deceive.

This is true whether the actuary is pricing a personal auto program, reserving a book of professional liability, building a catastrophe model for homeowners, designing a reinsurance program, or advising a carrier’s board on capital adequacy. Across every line of business and every function, the actuary’s core contribution is not calculation. It is the disciplined exercise of judgment about uncertainty.


What Changes When Everyone Has Agentic AI

Now imagine that every cartographer in the world is suddenly given a satellite with real-time, sub-meter resolution imagery. The technical barrier to creating a detailed map collapses overnight. Does the cartographer become obsolete? No. But the source of their value undergoes a radical shift. When everyone can see the same terrain in exquisite detail, the scarce resource is no longer observation. It is interpretation. It is knowing what the map is for, which features matter for which journeys, and where the satellite imagery itself might be misleading.

This is precisely the transition facing insurance actuaries across the entire P&C spectrum.

For decades, actuarial work has been inseparable from computational labor. Building a rating algorithm, running a reserve analysis, fitting distributions to loss data, constructing credibility-weighted estimates, performing catastrophe model analyses: these tasks required genuine skill, and the ability to perform them well was a meaningful source of competitive advantage. A carrier with better models, faster analytics, and more skilled actuaries could see risk more clearly and price it more accurately than competitors. A reinsurer with superior portfolio analytics could identify opportunities and avoid adverse selection. A consulting actuary with deep technical fluency could deliver reserve opinions that stood up to regulatory scrutiny.

Agentic AI disrupts this in a way that prior waves of technology did not. Spreadsheets automated arithmetic. Programming languages automated repetitive modeling. But agentic AI automates the reasoning chain itself. An agentic system does not simply execute instructions. It decomposes a problem, determines what data it needs, selects appropriate methods, executes them, evaluates the results, and iterates, all with minimal human direction. PwC’s 2025 Global Actuarial Modernization Survey found that less than 50% of actuaries currently demonstrate proficiency in data science and AI, yet over 60% recognize these as critical skill gaps they must develop. The technical moat that once separated a sharp actuarial team from an average one is, if not eliminated, dramatically narrowed.

The philosopher Michael Polanyi drew a famous distinction between explicit knowledge, the kind that can be written down and codified, and tacit knowledge, the kind embedded in practice, intuition, and bodily skill. “We know more than we can tell,” he wrote. For years, actuaries have relied on a blend of both. But agentic AI is remarkably effective at absorbing explicit knowledge and even simulating aspects of tacit knowledge through pattern recognition at scale.

Now imagine an insurance market in which every participant, every carrier, every broker, every reinsurer, every MGA, every regulator, has access to comparable agentic AI. Models converge. Loss picks narrow. Rating algorithms optimize toward similar structures. Technical pricing, once a source of differentiation, becomes nearly homogeneous. This is a version of what game theorists call a symmetry condition: when all players have equivalent information and equivalent analytical capability, the game changes fundamentally. If agentic AI pushes insurance pricing toward this kind of informational symmetry, then the actuarial function, conceived purely as a pricing or reserving engine, approaches a kind of commoditized equilibrium.

But insurance is not, and has never been, a frictionless market. It is a market defined by relationships, regulation, ambiguity, long-tail uncertainty, and asymmetric information that no model can fully resolve. The insured knows things about their risk that the carrier does not. The carrier knows things about portfolio correlation that the insured cannot see. The cedent knows things about their book that the reinsurer cannot observe. Policy wordings contain latent ambiguities that only surface after a loss event no one anticipated. Regulatory environments differ across jurisdictions in ways that profoundly affect pricing and reserving. Social inflation reshapes jury behavior in ways no historical model can anticipate. These are not problems of computation. They are problems of meaning, and they are precisely the problems that resist automation most fiercely.


From Technician to Philosopher of Risk

The German philosopher Martin Heidegger warned that the danger of technology is not that it fails, but that it succeeds so thoroughly that we begin to see the entire world through its frame. He called this Gestell, the tendency of technology to reduce everything to a “standing reserve,” a resource to be optimized. When the entire insurance industry views risk through the lens of agentic AI, the danger is not that the models will be wrong. It is that everyone will be wrong in the same way, because everyone is using the same frame.

This is where the insurance actuary’s role transforms from technician to something closer to a philosopher of risk, someone whose value lies not in running models, but in questioning them. Not in generating outputs, but in interrogating assumptions. Not in computing prices, but in asking what the price means in the context of a portfolio, a market cycle, a regulatory environment, or a company’s strategic ambitions.

Consider the chess analogy. When IBM’s Deep Blue defeated Garry Kasparov in 1997, many predicted the end of professional chess. Instead, the opposite happened. Chess became richer and more creative. The best players in the world today are not those who play like engines; that is a losing game. They are those who understand what engines cannot fully grasp: the psychology of the opponent, the narrative of a position, the strategic choices that lie beyond the calculation horizon. In advanced chess, the human’s role is not to out-calculate the machine but to direct its power toward the right questions.

Insurance actuaries face the same inflection. The actuary who tries to compete with agentic AI on speed, volume, or computational precision will lose. The actuary who learns to wield agentic AI, to direct it toward the questions that matter, to recognize where its outputs are brittle, to understand the commercial, regulatory, and strategic implications of a model’s assumptions, becomes exponentially more valuable.

There is a deeper point here, one that draws on Aristotle’s distinction between episteme (scientific knowledge), techne (craft or technical skill), and phronesis (practical wisdom). Agentic AI is a spectacular engine of episteme and techne. It can know facts and apply techniques with superhuman speed and consistency. But phronesis, the wisdom to act rightly in particular, ambiguous, high-stakes situations, remains a fundamentally human capacity.

In insurance, phronesis looks like this: knowing when a model is technically correct but commercially irrelevant. Recognizing that a client relationship or distribution channel justifies a different risk appetite than the spreadsheet suggests. Understanding that a catastrophe model’s “1-in-250-year” output carries deep epistemic uncertainty that should not be taken at face value. Sensing that a market is hardening or softening before the data fully confirms it. Advising an underwriter not merely on what the rate should be, but on what the rate can be in a given competitive and regulatory context, and what portfolio-level trade-offs that implies. Explaining to a board of directors why a reserve estimate is uncertain and what that uncertainty means for capital planning. Recognizing that a new line of business, such as cyber or parametric weather, does not fit neatly into the historical frameworks the profession has built over decades.

These are acts of judgment, not calculation. They require an understanding of context, consequence, and human motivation that agentic AI can inform but cannot replace.


The Danger of Abdication

There is, however, a cautionary note. The greatest risk of the agentic AI era is not displacement. It is abdication. When the machine produces answers faster and more confidently than any human, the temptation is to defer. To stop questioning. To let the model’s output become the answer rather than an input to deliberation.

History offers a warning. In the years preceding the 2008 financial crisis, the credit rating agencies and many actuarial functions relied heavily on models, particularly Gaussian copula models for correlated default, that were technically sophisticated but philosophically naive. The models worked beautifully in normal conditions and failed catastrophically in precisely the conditions they were supposed to protect against. The problem was not that the models were bad. The problem was that the humans in the loop stopped asking whether the models were adequate representations of reality. They abdicated judgment to the machine.

The insurance industry has its own parallel examples. Catastrophe models that underestimated hurricane risk before 2005. Reserve analyses that failed to anticipate the emergence of mass tort liabilities. Pricing models that did not account for the feedback loops of social inflation. In each case, the models were sophisticated, and in each case the failure was not technical but epistemic: the humans trusted the output more than the output deserved.

Agentic AI, with its fluency, speed, and apparent comprehensiveness, amplifies this risk by an order of magnitude. The insurance actuary of the future must therefore cultivate a disciplined skepticism, not of AI itself, but of the seductive ease with which AI can produce answers that look right.


Where the Education Standards Stand Today, and Where They Must Go

This brings me to the heart of this letter.

The CAS has taken meaningful steps toward modernization. The PCPA was introduced in 2026 as a requirement for the ACAS credential, ensuring candidates have practical, hands-on skills in predictive modeling and data analytics. The CAS launched its inaugural iCAS AI Cohort, offering a Certificate in Advanced AI for Actuarial Science along with five years of ongoing support, resources, and a network to stay ahead in the evolving AI landscape. The CAS recently released The CAS AI Primer: Practical Guidance for Actuaries, a concise, practitioner-focused resource designed to support actuaries in the responsible use of artificial intelligence. The CAS has also issued a 2026 Request for Proposals on Adapting Large Language Models for Specialized P&C Actuarial Reasoning. And the CAS's urgency around building skills for the future, including its stated goal that more actuaries learn artificial intelligence, data science, and machine learning and their practical application, led to the formation of the Artificial Intelligence Working Group, led by Mario DiCaro, FCAS, VP of Capital Modeling & Analytics at Tokio Marine.

These are commendable and necessary initiatives. But they remain largely supplemental. They sit alongside the core credentialing pathway rather than within it. The PCPA Exam covers predictive modeling fundamentals in a property and casualty insurance context, testing knowledge of data exploration, model construction including GLMs and decision trees, model validation, and interpretation and communication of results. The AI Fast Track and AI Primer are continuing education resources and optional programs. None of this reaches the level of structural reform that the moment demands.

Consider what the current credentialing pathway actually tests. The CAS basic education structure comprises Validation by Educational Experience (VEE) requirements, three Data Insurance Series Courses (DISCs), several examinations, and the Course on Professionalism. The ACAS path runs through Exams 1 and 2 (Probability and Financial Mathematics), MAS-I and MAS-II (Modern Actuarial Statistics), Exam 5 (Basic Ratemaking and Estimating Claim Liabilities), the PCPA, and Exam 6 (Regulation and Financial Reporting), plus the DISCs and VEEs. The FCAS path adds Exam 7 (Estimation of Policy Liabilities), Exam 8 (Advanced Ratemaking), and Exam 9 (Financial Risk and Rate of Return). This is a rigorous and technically excellent education in the mechanics of actuarial science. But it is, overwhelmingly, an education in computation: how to calculate reserves, how to build rating plans, how to model frequency and severity, how to estimate risk loads.

What it does not teach, and what the profession now urgently requires, is the constellation of skills that will define the actuary’s value in a world where computation is abundant and judgment is scarce.


Seven Concrete Proposals for the CAS and the Profession

I respectfully submit the following proposals for consideration. They are intended to be practical, specific, and implementable within the existing credentialing architecture, though some may require new partnerships, new exam content, or new forms of assessment.

1. Integrate AI Orchestration and Governance into the Exam Syllabus, Not Just Continuing Education

The PCPA is a step in the right direction, but it stops at predictive modeling fundamentals. The CAS AI Primer emphasizes the importance of actuarial judgment in AI-enabled workflows and reinforces the profession’s role in ensuring AI outputs are appropriate, explainable, and aligned with business and regulatory expectations. This philosophy needs to move from a voluntary primer into the exam pathway. Candidates pursuing FCAS should be tested on AI model governance, validation of AI outputs in actuarial workflows, prompt engineering and agent design principles, the limitations and failure modes of large language models, and the regulatory landscape surrounding AI in insurance. This could be incorporated into a revised Exam 9 or introduced as a new module between ACAS and FCAS. The goal is not to turn actuaries into machine learning engineers. It is to ensure that every credentialed Fellow understands how to direct, evaluate, and govern AI systems in a professional context.

2. Add a Required Module on Strategic and Commercial Judgment

Aristotle’s phronesis cannot be tested with a multiple choice exam, but it can be cultivated. The CAS should introduce a case-based assessment, similar in format to the PCPA project but focused on strategic decision-making, into the FCAS pathway. Candidates would be presented with realistic insurance scenarios involving incomplete information, competing stakeholder interests, ambiguous policy language, regulatory constraints, and commercial pressure. These scenarios should span the breadth of P&C practice: a personal lines carrier navigating rate adequacy amid regulatory resistance, a specialty insurer deciding whether to enter the cyber market, a reinsurer evaluating a catastrophe treaty in an uncertain climate environment, a commercial lines team responding to emergent mass tort exposure. Candidates would be evaluated not on the correctness of a single number, but on the quality of their reasoning, the clarity of their communication, and their ability to articulate trade-offs. This is the actuarial equivalent of what business schools call the case method, and it is conspicuously absent from the current credentialing process.

3. Require Formal Training in Communication and Influence

In a world where any stakeholder can query an AI and get a technically plausible answer, the actuary’s authority will no longer come from being the sole keeper of complex models. It will come from the ability to tell a coherent, credible story about risk, to boards, to underwriters, to regulators, to agents, and to clients, and to defend that story under scrutiny. The current Course on Professionalism addresses ethical obligations but does not meaningfully develop communication or persuasion skills. The CAS should either expand the Course on Professionalism or introduce a parallel requirement that includes structured training in presenting actuarial conclusions to non-technical audiences, writing effective actuarial memos and board presentations, testifying before regulatory bodies, and facilitating discussions where actuarial analysis meets commercial reality. This is especially critical as actuaries increasingly occupy strategic roles, not just technical ones, within their organizations.

4. Introduce Cross-Disciplinary Syllabus Content on Epistemic Humility and Model Risk

The 2008 financial crisis, Hurricane Katrina, the emergence of asbestos liabilities, and every major insured event that has exceeded modeled expectations teach the same lesson: models fail most dangerously when their users forget that they are models. The CAS syllabus should include explicit content on the philosophy and practice of model risk. This means going beyond the technical treatment of parameter uncertainty and estimation error already present in the exams. It means teaching candidates about the history of model failures in finance and insurance; the epistemological limits of statistical inference, particularly in the fat-tailed, non-stationary domains that P&C insurance inhabits; behavioral biases that affect actuarial judgment, such as anchoring, overconfidence, and herding; the concept of “unknown unknowns” and how to build decision frameworks that account for them; and the ways in which social, legal, and technological change can render historical data misleading. This content exists in abundance in the academic literature. It simply has not been brought into the actuarial syllabus.

5. Create FCAS-Level Specialization Tracks That Reflect the Modern Insurance Landscape

The current FCAS pathway is generalist by design. While this has served the profession well, the increasing complexity of the insurance ecosystem demands more specialized training. The CAS should consider creating optional specialization tracks, analogous to what the Institute and Faculty of Actuaries in the UK offers through its specialist exams, that allow FCAS candidates to demonstrate deep competence in areas such as: personal lines pricing and algorithmic rating in a regulatory environment; commercial and specialty lines, including emerging risks like cyber, climate, and autonomous vehicles; reinsurance, catastrophe modeling, ILS, and capital markets; enterprise risk management, capital modeling, and strategic planning; and insurance data science, AI governance, and model validation. This would preserve the generalist foundation of the FCAS credential while acknowledging that the actuary who prices personal auto faces a meaningfully different professional landscape than the actuary who models catastrophe reinsurance, and both deserve education that speaks to their domain.

6. Formalize a Partnership Between the CAS and Computer Science and AI Research Institutions

The CAS has stated it will pursue strong working relationships with academia and professionals in related fields. This principle should be operationalized aggressively. The CAS should establish formal partnerships with leading AI research groups and computer science departments to co-develop syllabus content on AI, create joint research programs that pair actuarial researchers with AI specialists, offer CAS candidates access to sandbox environments where they can practice directing agentic AI systems on actuarial tasks, and ensure the syllabus evolves in real time as AI capabilities advance, rather than on the multi-year lag typical of exam syllabus updates. The insurance industry is already a major consumer of AI: 84% of health insurers and 88% of auto insurers currently use AI and machine learning models. The credentialing body should be ahead of this curve, not behind it.

7. Shift Exam Philosophy from “Can You Calculate?” to “Can You Judge?”

This is the most fundamental proposal, and the one that underpins all the others. The current exam system overwhelmingly rewards computational fluency. Candidates study hundreds of hours to master techniques for estimating IBNR, building rating algorithms, calculating risk loads, and performing experience rating. These are important skills. But in the age of agentic AI, they are table stakes. The differentiating competency is judgment: the ability to look at a model’s output and determine whether it is reasonable, the ability to identify what a model is not capturing, the ability to make a decision when the data is ambiguous and the stakes are high, the ability to understand how a technical result translates into a business decision. Exam questions should increasingly present candidates with completed analyses and ask them to evaluate, critique, and improve those analyses, rather than asking them to build the analyses from scratch. This mirrors the actual workflow that credentialed actuaries will face in practice: reviewing and governing AI-generated work, not manually replicating it.


The Stakes Are Real

I do not raise these proposals lightly. The membership of the CAS includes over 10,000 actuaries worldwide, employed by insurance companies, industry advisory organizations, national brokers, accounting firms, educational institutions, state insurance departments, the federal government, and independent consultants. The decisions the CAS makes about its syllabus and credentialing standards ripple through the entire P&C industry.

AI is fundamentally transforming actuarial work, not eliminating it. Employment projections show 22% growth through 2034, approximately 4 times faster than the national average, because AI automates technical execution while expanding demand for actuarial expertise in model governance, strategic interpretation, and regulatory compliance. The profession is growing. The question is whether the actuaries entering the profession are equipped for what the profession is becoming.

The insurance industry was built on the radical idea that uncertainty can be priced, pooled, and managed. That idea has always required human beings willing to stare into the unknown and make disciplined decisions despite incomplete information. No machine changes that. Whether the actuary is pricing a homeowners program in a wildfire-prone region, reserving for a book of D&O liability, modeling the tail risk of a reinsurance portfolio, advising a startup MGA on its product design, or explaining to a state regulator why a rate increase is necessary and fair, the fundamental act is the same: the disciplined application of judgment to uncertainty. The tools are different. The responsibility is the same.

The CAS has led the P&C actuarial profession with distinction for over a century. The CAS was organized in 1914 as a professional society for the promotion of actuarial and statistical science as applied to insurance other than life insurance. Its exams are rightly regarded as among the most rigorous professional credentialing standards in any field. But rigor without relevance becomes an anachronism. The profession needs you to move, and to move now, not merely to add AI content at the margins, but to rethink what it means to educate an actuary for a world where the most powerful analytical tool in history is available to everyone.

The future belongs to the actuary who thinks, not the actuary who calculates. It is time for the education standards to reflect that truth.

Respectfully submitted,

A Fellow of the Profession and a Practitioner of Insurance


If you feel passionate enough to write a long, open letter to the CAS I’m amazed you don’t sign your name.

1 Like

tl; ai slop; didn’t read

4 Likes

So, basically, increase the barriers to entry yet again, decreasing the supply yet again, seeking further rents. Got it.

1 Like

I would categorize this as “AI-Augmented or High-Quality AI Output.” […] there are specific markers that suggest it was crafted (or heavily refined) by a large language model like myself.

I believe a human prompted an AI with a very detailed outline—perhaps even providing the specific exam names and the “cartography” analogy—and asked the AI to “write a formal, philosophical, and urgent open letter.” The AI then provided the polished phrasing, the philosophical citations, and the structured list.

It has some features to it that make me think it is more generated

For example, it carefully distinguishes between aristotle’s knowledge, skill/art, and prudence. But it first calls actuaries “applied epistemologists.” and epistemologist as i understand the word is a philosopher who thinks about the foundations, limits, and nature of knowledge in general. i don’t think that word very easily applies to actuaries. it seems strange to me that a person would do both of these things simultaneously, in the same document.

another example: it takes two definitions used by michael polanyi but then, to my mind, ignores all the assumptions and associated thinking behind those definitions in various ways i won’t bother to articulate. again, something i don’t think a person would likely do in the same document.

Now I don’t feel quite so bad about how long-winded I can be.

2 Likes

Do you have the Coles Notes version?

1 Like

The CAS already struggles with adequately distinguishing between people who can regurgitate information and pass an exam from people who truly understand the material and can apply it. I have zero faith it will successfully even scratch the surface on any of this.

Who knows. Maybe we can find some unicorns who have no bias, intentional or unintentional, who can do all of this.

2 Likes

I hope this bot doesn’t believe that someone from CAS reads this message board.

3 Likes

My first thought was it’s a compliment to this message board!