In my experience with the kind of work that I do, the analysts are often the ones applying the judgement, and it is helpful to be able to see the “guts” of the analysis. Part of the issue may be that there is not enough data to make it worthwhile to build a unifying statistical framework and then feed judgement into it.
I have also worked with R and Python, and experienced the sorts of situations you are talking about, and I agree that in those situations a statistical programming language is better.
For a lot of things, structured repetition with feedback is what makes people good at them.
Are you familiar with the concept of rate of convergence? Infinity is not 1 billion or 1 trillion; it’s big, very big. Some people don’t have what we call “talent” and can practice a sport forever and never become good at it. Just like a nurse will never replace a doctor, even though they’ve seen it all.
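To pin down what rate of convergence buys the argument, here’s a quick sketch in standard notation (the sequences are illustrative, not from any study):

```latex
% Two learners improving toward the same mastery level x*.
% Linear rate: the remaining gap shrinks by a constant factor per session.
\lvert x_{n+1} - x^* \rvert \le c\,\lvert x_n - x^* \rvert, \qquad 0 < c < 1
% Sublinear rate: the gap still goes to 0 "at infinity", but hopelessly slowly.
\lvert x_n - x^* \rvert = O\!\left(\frac{1}{\ln n}\right)
% After n = 10^{12} practice sessions, 1/\ln(10^{12}) \approx 0.036: the second
% learner still carries ~4% of the starting gap. "Converges eventually" can
% mean never within a human lifetime.
```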
Intelligence is precisely not needing a lot of repetition and training to learn how to do something. Here’s a funny story about John Daly and Tiger Woods:
Sure. If you are meeting with the investment or pricing committee (or whatever board applies to you), they won’t care about the nuts and bolts. They just want the results and summaries.
But before it gets to that point, you will need to reach a consensus, and many assumptions in the models may be challenged (this is especially true when a lot of $$$ or £££ is involved).
That’s when you will need to share the nuts and bolts. Most important models and design considerations have to go through many checks before it all gets put in front of the CFO for a decision.
It’s just easier to do this in Excel (you can lock down the spreadsheet and make it read-only if need be), especially when you also have to deal with non-actuarial stakeholders (corporate development, risk, etc.).
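For what it’s worth, the lock-down step can even be scripted. A minimal sketch, assuming the openpyxl library; the file name and password are placeholders:

```python
# Minimal sketch of locking a workbook down before circulating it.
# File name and password are placeholders.
from openpyxl import load_workbook
from openpyxl.workbook.protection import WorkbookProtection

wb = load_workbook("pricing_model.xlsx")   # hypothetical file

# Lock every worksheet so inputs and formulas can't be edited.
for ws in wb.worksheets:
    ws.protection.sheet = True
    ws.protection.password = "review-only"

# Lock the workbook structure (no adding/removing/reordering sheets).
wb.security = WorkbookProtection(lockStructure=True)

wb.save("pricing_model_readonly.xlsx")
```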
I see a lot of these sorts of classification schemes and think they can be helpful.
However, they are prescriptive, not scientifically descriptive, kind of like most, if not all, personality tests. In other words, if you are trying to organize your thoughts, then any kind of scaffold can help.
But I’m not sure that things really do group that way naturally.
In particular, that doesn’t really describe the underlying technology at all.
I think some of what is happening is that some academics may be trying to “retcon” an entire set of relationships for funding and legitimacy purposes.
For example, I’ve seen the claim that machine learning is a “subfield” of “AI”. I wonder if that isn’t kind of like saying biology is a subfield of physics. Sure, biological systems obey the laws of physics, so you can set up a hierarchy in which biology is “below” physics. But that tells you absolutely nothing about actual biological research in relation to physics, past or present. It also doesn’t really tell you anything about biological systems beyond a very broad statement that is probably as much philosophical as scientific.
We are a weird mix of statistician, accountant, and specialized business analyst. I’d agree with poly that part of it is that you can’t count on everyone to have the technical skills. And with mag that a lot of it is judgment.
I’d also add that the money side of things lends itself to tabular formats-- often the data adds up to a ‘balance sheet’, with sums of enrollment, revenue, claims, and admin being added and subtracted across various categories and subcategories… and then there are lots of finicky bits like fees, adjustments, rebates, exclusions, taxes, allocations, factors, etc. You can create a dashboard-- like you say-- but given a dashboard we’d all want to see the numbers underneath.
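As a toy illustration of that kind of roll-up (product names and numbers invented for the example):

```python
# Toy roll-up in pandas: the kind of balance-sheet arithmetic described
# above. Categories and numbers are invented for illustration.
import pandas as pd

df = pd.DataFrame({
    "product": ["HMO", "HMO", "PPO", "PPO"],
    "revenue": [1200.0, 950.0, 800.0, 640.0],
    "claims":  [900.0, 720.0, 610.0, 500.0],
    "admin":   [110.0, 90.0, 75.0, 60.0],
    "fees":    [15.0, 12.0, 10.0, 8.0],
})

# Sum by category, then derive the bottom line the same way a
# spreadsheet would: revenue minus claims, admin, and fees.
summary = df.groupby("product").sum()
summary["margin"] = (summary["revenue"] - summary["claims"]
                     - summary["admin"] - summary["fees"])
print(summary)
```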
I would agree at a certain scale-- when we have tables with 100k or 1M rows, we really ought to understand the statistics beyond averages and trends. We should have distributions, and regressions, and time series, and predictive models, and all that jazz. And we do all of that sometimes (often in Excel). But it’s just not really our main gig.
A nurse won’t become a doctor because they don’t have the training/licensing. There’s nothing magical about becoming a medical doctor. If the nurse finds they like the work a doctor does, it’s possible for them to go do nurse practitioner training or get an actual MD. The biggest constraints are time and money.
You seem to be conflating becoming good with becoming the world’s best. Sure, I can throw a baseball thousands and thousands of times in a structured training program and never become a good enough pitcher to play in MLB. Odds are, though, that I’d be good enough to play at a higher level of baseball than a person who never practices at all. Similarly, the radiology trainee who reads thousands and thousands of scans is going to be better at it than most other people who don’t read scans. The only reason the radiology trainee is a radiology trainee is that they got the radiology internship rather than an internal medicine or orthopedics internship, not because they’re an innate expert at reading scans.
We have those meetings too. Normally, we’re looking at a PowerPoint presentation that’s going over the main model equations and error structures and possibly some figures showing the shapes of the relationships and why we chose that structure. There isn’t a lot of need to see the code itself.
artificial intelligence
Information technology that performs tasks that would ordinarily require biological brainpower to accomplish, such as making sense of spoken language, learning behaviours or solving problems.
By that definition, linear regression is AI. At this point I think AI is a marketing term more than a clear-cut field with distinct sub-areas.
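To see why the quoted definition sweeps in linear regression: a few lines of numpy “learn a behaviour” from examples in exactly the quoted sense (the data here is made up for illustration):

```python
# Ordinary least squares "learns" a behaviour (a mapping from x to y)
# from examples -- which is all the quoted definition of AI requires.
# Data below is made up for illustration.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])   # roughly y = 2x

# Fit y ~ a*x + b by least squares.
A = np.vstack([x, np.ones_like(x)]).T
(a, b), *_ = np.linalg.lstsq(A, y, rcond=None)

# The "learned" model now handles inputs it has never seen.
print(a * 6.0 + b)   # prediction for x = 6, close to 12
```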
Mostly agree with this. They are throwing the AI moniker at many things just to attract more money.
It remains to be seen what will be left when the tide of public and private money recedes.
Not sure about the US/Canada, but we are now under some marked cost pressures because of taxes, so they are really pushing the “AI reduces operational costs” angle.
I think the direct applications of NNs are sort of like an ‘intuition’, which I would agree is complementary to logic. Though they both use a lot of compute.
If you compare Deep Blue to AlphaZero, you can see it. Deep Blue was thousands of pieces of hard-coded chess knowledge, and it won by brute force, applying every rule to every possibility.
AlphaZero had no coded strategy, but it learned what a good position “looks like” (pattern recognition). Like a human, it looks at far fewer moves, but it makes up for that by focusing on the best ones. So in a way it is less brute force, more finesse. It’s also more flexible, able to do things like sacrifice a piece for a vaguely defined edge.
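A schematic way to see the difference. Everything below is a toy sketch, not real engine code: `legal_moves`, `apply_move`, `handcrafted_eval`, and `value_net` are hypothetical stand-ins, and real AlphaZero wraps its network in Monte Carlo tree search.

```python
# Schematic contrast on a toy "game": positions are integers and moves
# just add small numbers. The point is the search pattern, not the game.

def legal_moves(pos):
    return [-2, -1, 1, 2]                 # toy move set

def apply_move(pos, move):
    return pos + move                     # toy position update

def handcrafted_eval(pos):
    return pos                            # stand-in for coded chess rules

def value_net(pos):
    return pos                            # stand-in for a learned evaluator

def deep_blue_style(pos, depth):
    """Brute force: expand EVERY legal move to a fixed depth,
    scoring leaves with hand-coded knowledge."""
    if depth == 0:
        return handcrafted_eval(pos)
    return max(-deep_blue_style(apply_move(pos, m), depth - 1)
               for m in legal_moves(pos))

def alphazero_style(pos, depth, k=2):
    """Finesse: the learned evaluator ranks moves, and only the top-k
    promising branches are searched deeper -- far fewer moves examined."""
    if depth == 0:
        return value_net(pos)
    # Lowest score for the opponent = most promising for us.
    promising = sorted(legal_moves(pos),
                       key=lambda m: value_net(apply_move(pos, m)))[:k]
    return max(-alphazero_style(apply_move(pos, m), depth - 1)
               for m in promising)

print(deep_blue_style(0, 3), alphazero_style(0, 3))
```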
I think the same is true of complex dynamical systems, like weather forecasting. GraphCast recently outperformed the best physics-based model. It required a lot of work to train, but far less computation to run, since it wasn’t trying to solve tons of differential equations on zillions of points.
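You can see the compute asymmetry in miniature with a toy 1-D diffusion example; the grid size, step counts, and the “surrogate” below are all placeholders, not GraphCast’s actual numbers:

```python
# Toy illustration of why a trained surrogate is cheap to RUN even if it
# was expensive to TRAIN. Everything here is a placeholder.
import numpy as np

u0 = np.random.rand(100_000)              # stand-in for "zillions of points"

def physics_forecast(u, steps=10_000, nu=0.1):
    """Explicit solver: thousands of tiny timesteps, each one touching
    every grid point (toy 1-D diffusion)."""
    for _ in range(steps):
        u = u + nu * (np.roll(u, 1) - 2 * u + np.roll(u, -1))
    return u

def surrogate_step(u):
    return u                              # stand-in for a trained network

def surrogate_forecast(u, steps=40):
    """Learned model: one forward pass per 6-hour jump, so a 10-day
    forecast is ~40 evaluations instead of thousands of solver steps."""
    for _ in range(steps):
        u = surrogate_step(u)
    return u
```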
And yes, it’s “barbaric”. I mean, some poor genius mathematician had to write 400,000 lines of beautiful code applying all of physics to a chaotic system, and some asshat from Google comes along and says:
That said… I think LLMs are weird in part because they are in the domain of language. They can potentially use pattern recognition to pick up on what logic looks like, i.e. modus ponens, etc.
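For reference, the pattern in question is shallow enough to write on one line:

```latex
% Modus ponens: the surface pattern an LLM could learn to imitate.
% From "P implies Q" and "P", conclude "Q".
\frac{P \to Q \qquad P}{Q}
```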
And if they are able to do that well enough, maybe they can just fake being logical well enough to be logical. You see this when they can write or debug a novel computer program. Obviously they’re not great at it, but they are arguably remarkable.
Watching LLMs “think” also makes me considerably more suspicious of what humans do when we “think”. No doubt we have some special brain functions, but I wonder how often our human reason is a word salad machine that imitates the meanings of words like “truth”.
Just get the people in the service departments reasonable computer systems so they can do their jobs properly. The difficulty is that it’s costly and the transition may be a pain.
It’s hitch-hiking on the concept of “intelligence”, which has enormous cultural importance but no real contemporary consensus on its meaning, either philosophical or scientific.
My larger worry is how it will start to influence what we think intelligence is. This has certainly happened with the IQ test.