I was one of the first beta testers of AI (Microsoft Copilot) at my company, and I would say it depends on what actuarial area you are in.
If you are in an area that is very repetitive, crank-the-handle type work (think valuation & base reporting), then it's likely that it could affect you.
If you are in a more subjective area (capital, modelling, etc.) then it's highly unlikely.
AI is great for summarising things (documents and meetings) and creating slides, but for the more conceptual stuff (when you are designing models or looking at capital impacts) it fails miserably.
So effectively, it's making my life a bit easier because I spend less time doing low-value-add work (creating a slide deck, looking over meeting notes, etc.) and more time doing higher-value-add work (validation and model development).
In the aggregate, I am becoming a bit more productive, which is what I see happening with most other actuaries.
One thing is that actuaries use a lot of Excel, and (all other LLM issues aside) LLMs can't navigate sheets well. The people I see getting the most out of it now are mostly programmers.
I think the most obvious medium-term "actuarial" use is when the boss wants to know "why is business XXX losing money?" and ideally that answer is in a report somewhere, but you might also spend a day writing 50 SQL queries, and I suspect a good LLM could do that, given the right training data and prompt.
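To make that concrete, here is a rough sketch (in Python with SQLite, purely for illustration) of the kind of query you might ask an LLM to draft; the table and column names (premium, claims, line_of_business, etc.) are made up, not from any real system:

```python
import sqlite3

# Hypothetical schema: a premium table and a claims table keyed by line of business.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE premium (line_of_business TEXT, calendar_year INTEGER, earned_premium REAL);
    CREATE TABLE claims  (line_of_business TEXT, calendar_year INTEGER, incurred_claims REAL);
    INSERT INTO premium VALUES ('XXX', 2023, 1000.0), ('XXX', 2024, 950.0);
    INSERT INTO claims  VALUES ('XXX', 2023,  700.0), ('XXX', 2024, 1100.0);
""")

# One of the many "why is business XXX losing money?" queries:
# loss ratio by calendar year for the line in question.
query = """
    SELECT p.calendar_year,
           SUM(c.incurred_claims) / SUM(p.earned_premium) AS loss_ratio
    FROM premium p
    JOIN claims c
      ON c.line_of_business = p.line_of_business
     AND c.calendar_year    = p.calendar_year
    WHERE p.line_of_business = 'XXX'
    GROUP BY p.calendar_year
    ORDER BY p.calendar_year;
"""
for year, loss_ratio in conn.execute(query):
    print(year, round(loss_ratio, 3))
```

Multiply that by every split of product, region, and distribution channel and you get the "50 queries in a day" picture.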
There have been solutions in the market with associative databases (what people used to call these pattern-recognition things before they were called AI). I know many medical plans outside the Americas used to use solutions built in QlikView. I don't know the North American products, but there must have been offerings around this, for example from Milliman, for ages now.
You will probably see these things if you sit through the clinical report from a medical plan as they try to manage their claims experience.
ETA: the first PhD on computer vision is considered to have been done in 1963. What has really changed over time is the affordability of computing power, which has made it easier to use images for pattern research. Areas like astrophysics have done this for a long time, assessing different telescope images of clusters.
Don't get me started on super nurses replacing family doctors… dear lord. We already have problems with family doctors sending patients to specialists for no reason.
It comes down to how you think learning is done.
Suppose learning is a mechanical process in which logical "sentences" are applied to primitive observations.
Philosophers investigated this picture of learning in the early 20th century under the label "logical positivism" or "logical empiricism". It was rejected.
Never mind that, though. Imagine if that's how thinking works. Then computers will be really good at it!
Mathematics is one area where this model of thinking seems to work better. Mathematicians really do start with complete definitions and then apply logic to them.
Ever since Descartes, mathematicians have been tempted to turn all thinking into math. Logical positivism was one example. I interpret this as the latest attempt. I don't think it's a coincidence that it's primarily mathematicians driving this latest round of "AI" technology, since what was needed was adapting existing models to scale to large data sets.
I'm not totally against AI. I think it is very effective for certain tasks. Like say you wanted to generate a website based on a description of what it should be used for. Errors in this case do not lead to any serious damage; you can literally just fiddle around with it until you get what you need.
I'm not sure I agree with the last paragraph. In my view, mathematical reasoning is pretty orthogonal to pattern recognition. My physics background is really poor, but I would say that a lot of mathematics was developed to understand highly complex dynamical systems in physics and other natural sciences. AI-based methods are really the domain of computer science: brute-force optimizations over a huge number of variables, not clever at all. It's really barbaric stuff. I think there is a sense that physics and science have had a hard time in recent decades, and people are starting to explore more computational approaches instead of relying on geniuses trying to come up with fancier and fancier models. I'm not sure what to think of this. It's no secret that physics research hasn't been very fruitful.
Sure, I could maybe direct an AI to do the work of an intern with enough prompting. One issue I've heard with local AI implementations so far is that the advanced learning, where the AI can learn from internal analysis, is turned off for security or privacy reasons. Potentially solvable, but if the central servers running the AI need access to company data to keep getting smarter, and that is turned off, all you end up with is 50 first days with the summer intern.
That's a good point. I think you are right on this, although mathematical reasoning involves a lot of pattern recognition in practice (think recognizing which integration rule to use).
However, that pattern recognition is being equated with thinking. The claim is that ChatGPT is more than a stochastic word predictor. It is nearly the last step in developing artificial general intelligence.
For that to be true, intelligence has to be a set of logical steps applied to data, as I mentioned before. I suppose the ChatGPT neural net encodes those logical steps, in this view.
I think this is what these AI companies want us to think.
It's not at all clear to me how these new models will help science in the role of anything more than a fancier Mathematica/computer algebra system.
We do have a historically important example of using brute force computation before the more fundamental laws could be figured out. It's Ptolemy's model of the solar system.
Learning, to me, is the ability to take a small amount of data and from that be able to "predict" or "solve" problems. In this way, mathematicians are very intelligent because they start with axioms (small data) and create an entire universe of definitions, propositions, etc. That's why I made the remark that a well-trained radiologist doesn't need to train on billions of scans, because he has actual medical training. That's the essence of intelligence.
The problem is the human body isn't axiomatic. Let's use body temperature (not body weight and height (BMI), though I could). It depends on what part of the body you measured, body size, age, current stage of development (e.g. puberty or menopause), and a number of other factors. That means a lot of biological measures need to be judged on a spectrum, which is what we call a medical specialist's expertise. Assessing where these measures sit on the curve, as well as accounting for the unique factors of the patient (e.g. having cancer vs. Ebola), is what leads to a lot of the diagnostic issues. The problem becomes multidimensional, which makes it harder for the human mind, and when that happens humans fall back on heuristics. To me, the main issue is that programs initially eliminate the human heuristics that are used when the problem becomes too multidimensional.
In places where the population is very large, the number of edge cases becomes large too. This happens gradually as the population increases, so again it looks like medical specialists are not so good.
Not axioms, but there are definitely established "mechanical" truths about anatomy that help in a medical setting. This is undeniable. Otherwise an experienced nurse could be a doctor.
https://www.reddit.com/r/medicine/comments/7yjopt/does_reasoning_from_first_principles_in_medicine/
Depends on how fundamental you consider your first principles to be. In many instances you can use an understanding of basic physiology and anatomy to understand the clinical picture and drive treatment decisions. Localizing lesions in Neurology depends on an understanding of neuroanatomy, heart failure management relies on an understanding of cardiovascular physiology, etc. Medicine requires the application of these principles all the time
Eh, reddit no good.
Not interested in an amateur's opinion, which is what reddit consists of.
I'm puzzled as to why so much actuarial work happens in spreadsheets rather than through statistical programming languages. Seems like that would give a bigger productivity boost.
Easier to share the work in spreadsheets around the company.
It's basically the lowest common denominator for everybody.
If you start sharing R or Python programs, the majority will not be able to understand (or use) them.
Why do athletes spend thousands of hours taking shots or throwing balls or running sprints?
Why do students spend hours and hours solving math problems?
For a lot of things, structured repetition with feedback is what leads to being good at something.
It's why we're able to train technicians to operate and read diagnostic imaging without going to medical school. They're doing pattern recognition.
In this thread, you posted an example of radiologists having to look at thousands of scans to get to the point where some scans become the easy ones. So, basically, throw thousands of images at them until they get good at diagnosing them.
If you want to train a good AI, you need to feed it a ton of scans, just like trainee radiologists get fed them. It needs to see lots of positive, negative, and indeterminate scans. The powers that be in radiology have the tools available to figure out when the AI is performing well enough to be operationalized. I don't see how this is any different from training optometrists to properly use an automatic refractometer (the machine that shows you a picture of a barn and spits out a prescription) in their practice.
As to why AIs need so many more scans than trainee radiologists: the trainees have medical training that tells them what to look for, while the AI comes in knowing nothing and only has tools for figuring out how to differentiate between images in different states.
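For what it's worth, here is a minimal sketch of the supervised-learning loop being described, using synthetic "scan-like" arrays purely as a stand-in; real radiology AI trains on real labelled scans with far more capable models, so treat this only as an illustration of "feed it lots of positives and negatives, then measure whether it performs acceptably well":

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Toy stand-in for "a ton of labelled scans": synthetic 16x16 "images" where
# positive cases carry a faintly brighter patch (a pretend lesion).
rng = np.random.default_rng(0)
n, h, w = 5000, 16, 16
images = rng.normal(size=(n, h, w))
labels = rng.integers(0, 2, size=n)      # 0 = negative scan, 1 = positive scan
images[labels == 1, 4:8, 4:8] += 0.5     # faint signal in the positives

X_train, X_test, y_train, y_test = train_test_split(
    images.reshape(n, -1), labels, test_size=0.2, random_state=0
)

# Fit a simple classifier on the labelled examples.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]

# The "is it performing acceptably well?" question becomes measurable.
print("test AUC:", round(roc_auc_score(y_test, scores), 3))
```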
It seems like madness to me to share the raw spreadsheet around the office with people who aren't going to be able to interpret much of it. I'd think it would make more sense to share a dashboard that lets them play around with things.
This is maybe part of it.
But I think spreadsheets have real advantages for some situations that statistical programming languages do not.
If your main concern is hypothesis testing, then you want an axiomatic, systematic approach that is as objective as possible, because reproducibility is what matters most. This is the situation for which statistical programming environments are designed.
However, actuarial analyses are often used to make decisions that require a lot of judgment. In this case, for various reasons, the spreadsheet can make it easier to apply that judgment, based on the needs ("utility function") of the company.
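Just to illustrate the reproducibility point: a scripted analysis like the toy hypothesis test below (simulated data, made-up effect size) gives exactly the same answer on every run, which is harder to guarantee in a hand-built spreadsheet:

```python
import numpy as np
from scipy import stats

# Simulated data purely for illustration: two groups with a small assumed
# difference in means. The fixed seed makes the whole analysis reproducible.
rng = np.random.default_rng(42)
group_a = rng.normal(loc=0.0, scale=1.0, size=200)
group_b = rng.normal(loc=0.2, scale=1.0, size=200)

result = stats.ttest_ind(group_a, group_b)
print(f"t = {result.statistic:.3f}, p = {result.pvalue:.4f}")
```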
My co-workers and I are using it for fisheries population modeling and doing projections under different scenarios. Lots and lots of judgment decisions and what-ifs. R and Python are both great options for doing this sort of work.
I'd think the people making the decisions are mostly looking at the summary tables, probably some figures, and maybe your model diagnostics, but for the most part they don't need to see the individual values in column G of the spreadsheet showing some intermediate step. It's the final values and scenario outcomes that are important, I suspect.
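As a toy example of the "scenario outcomes" point, something like the Python sketch below is all a decision maker really sees; the logistic-growth model and the harvest rates are invented for illustration, not from any actual stock assessment:

```python
# A minimal, made-up scenario projection: grow a population with logistic
# dynamics and report only the final outcome under each assumed harvest rate.
def project(pop0, years, growth, capacity, harvest_rate):
    pop = pop0
    for _ in range(years):
        pop += growth * pop * (1 - pop / capacity) - harvest_rate * pop
        pop = max(pop, 0.0)
    return pop

scenarios = {"no harvest": 0.00, "moderate": 0.05, "heavy": 0.15}
for name, rate in scenarios.items():
    final = project(pop0=1000.0, years=20, growth=0.3, capacity=5000.0, harvest_rate=rate)
    print(f"{name:>10}: final population ~ {final:,.0f}")
```

All of the intermediate columns live in the code; only the scenario summary gets shared.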