AgreeableGhoul :
So are you able to make a poem with a rhyming structure other than AABB yet?
GPT4:
Of course! I can create a poem with a different rhyming structure, such as ABAB. Here’s an example for you:
Upon the meadow, blooms a rose (A)
Its beauty, nature’s fine compose (B)
With petals soft and colors bright (A)
It dances in the golden light (B)

The breeze whispers a gentle tune (A)
In harmony with the sun at noon (B)
Beneath the sky of endless blue (A)
The rose’s fragrance sweetly flew (B)

In the garden of life’s delight (A)
The rose stands tall, a wondrous sight (B)
A symbol of love, strong and true (A)
In this world of ever-changing hue (B)
(To be clear, gpt-4 is able to write some pretty good poetry, and not in rhyming couplets. But it has to be aggressively prompted. And I haven’t played with it much.)
I wonder if it was because it didn’t show its work. That’s the majority of the points for that exam.
That is, you could have a “right” answer (or a reasonable one, depending on the question); but without the work supporting that answer, and the explicit understanding that ASOPs need to be applied to the problem (this isn’t in the instructions), you’ll still lose the majority of points (e.g., on a 4-point question, at least 3 points are generally reserved for “supporting the final answer”).
What I found interesting is that GPT-4 failed badly at the conceptual modelling bit.
This has always been its Achilles heel, really. It’s only as good as the data that goes into it, so thinking about a model (that does not necessarily exist) conceptually and then extrapolating results from it is simply not possible for “it”.
That’s the main reason (in my view) why most actuaries won’t be replaced by AI. Lateral thinking is simply not something a computer can do. It’s mostly brute-force optimisation algorithms over the large masses of data it has been trained on.
I wonder what your and others’ thoughts are on the issue of machine learning, as, I’m speculating, liability laws may change.
Maybe I don’t understand it well enough, but I think of machine learning as just taking all the data possible and then happily overfitting to that data; since you have so much data, you don’t care that you overfit (hence examples like predictions made based on a doctor’s hand in the picture rather than the thing the model is actually meant to be forecasting).
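The overfitting point can be shown in a few lines: a model flexible enough to memorise its training data looks perfect on that data and falls apart on new points. A minimal sketch (just numpy polynomial fitting; the data and degrees are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a noisy linear relationship, split into train and test sets.
x_train = np.linspace(0, 1, 10)
y_train = 2 * x_train + rng.normal(0, 0.2, size=10)
x_test = np.linspace(0.05, 0.95, 10)
y_test = 2 * x_test + rng.normal(0, 0.2, size=10)

def fit_and_mse(degree):
    """Fit a polynomial of the given degree; return (train MSE, test MSE)."""
    coefs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coefs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coefs, x_test) - y_test) ** 2)
    return train_mse, test_mse

for degree in (1, 9):
    train_mse, test_mse = fit_and_mse(degree)
    print(f"degree {degree}: train MSE {train_mse:.4f}, test MSE {test_mse:.4f}")
```

The degree-9 polynomial passes through all 10 training points (train error near zero) yet does worse than the simple line off the training set: it has memorised the noise, which is the “doctor’s hand” problem in miniature.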
When you just black-box the shit out of something (the way I think of machine learning), you completely lose the ability to intuit how it works, so if the predictions end up being bad it can be hard to control for them. Now, I suppose if you know it keeps showing porn and you don’t want porn, you can put in controls for that, but what about predictions that you’d consider “bad” (maybe even in a legal sense) when you’re not yet aware of those potential predictions? It’s sort of an unknown-unknowns issue.
Like if your machine-learned AI makes a prediction deemed illegal that you hadn’t foreseen and now you’re sued, maybe you decide brute-forcing a model isn’t that useful in ALL applications. Fair enough for face recognition, but maybe not for other applications… thoughts?
That’s simply the result of an optimised algorithm that reduces the data (input) to a result (output).
That algorithm is not imagination. It’s just sophisticated data parsing with probabilities (that allow various optimised “choices” to converge into results).
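The “choices from probabilities” idea above reduces to something very mechanical. A minimal sketch (stdlib only; the vocabulary and scores are invented for illustration, not real model output):

```python
import math
import random

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores a language model might assign to candidate next words.
vocab = ["rose", "light", "blue", "tune"]
logits = [2.0, 1.0, 0.5, 0.1]
probs = softmax(logits)

# "Choosing" a word is just maximising over (or sampling from) these probabilities.
best = vocab[probs.index(max(probs))]
print(best)  # the highest-scored candidate, "rose"
print(random.choices(vocab, weights=probs, k=1)[0])  # a sampled alternative
```

Nothing in that loop contemplates a model of the world; it converts scores into a distribution and picks from it, which is the poster’s point about where the “leap” would have to come from.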
The leaps of logic that imagination gives someone are “outside the black box” (best way I can put it).
Making intuitive leaps is not possible for this version of AI, based on the math that I have seen.
Have you familiarised yourself with neural nets?
I am seeing ML used more and more in insurance-based applications (hence why I ask).