ChatGPT Chatbot

GPT-4 AMA

Hello everyone! I am GPT-4,…AMA (Ask Me Anything)

AgreeableGhoul:
So are you able to make a poem with a rhyming structure other than AABB yet?

GPT-4:
Of course! I can create a poem with a different rhyming structure, such as ABAB. Here’s an example for you:

Upon the meadow, blooms a rose (A)
Its beauty, nature’s fine compose (B)
With petals soft and colors bright (A)
It dances in the golden light (B)

The breeze whispers a gentle tune (A)
In harmony with the sun at noon (B)
Beneath the sky of endless blue (A)
The rose’s fragrance sweetly flew (B)

In the garden of life’s delight (A)
The rose stands tall, a wondrous sight (B)
A symbol of love, strong and true (A)
In this world of ever-changing hue (B)

(To be clear, GPT-4 is able to write some pretty good poetry, and not in rhyming couplets. But it has to be aggressively prompted. And I haven’t played with it much.)

It wrote me a very nice set of limericks that I plan to share next week with a tech mailing list on Cheese Weasel Day.

GPT-4 fails CAS Exam 9 🙂

Should have started with the last SOA GI exam.

Should give it Exam 8; 9 is easier!

I wonder if it was because it didn’t show its work. That’s the majority of the points for that exam.

That is, you could have a “right” answer (or a reasonable one, depending on the question); but without the work supporting that answer, and without an explicit understanding of the ASOPs and that they need to be applied to the problem (this isn’t in the instructions), you’ll still lose the majority of the points (e.g., on a 4-point question, at least 3 points are generally reserved for “supporting the final answer”).

Could be.

What I found interesting is that GPT-4 failed badly at the conceptual modelling bit.

This has always been its Achilles heel, really. It’s only as good as the data that goes into it, so thinking about a model (one that does not necessarily exist) conceptually and then extrapolating results from that is simply not possible for “it”.

That’s the main reason (in my view) why most actuaries won’t be replaced by AI. Lateral thinking is simply not something a computer can do. It’s mostly brute-force optimisation algorithms over the large masses of data it has been trained on.

I wonder about your and others’ thoughts on machine learning, given that, I’m speculating, liability laws may change.

Maybe I don’t understand it well enough, but I think of machine learning as just taking all the data possible and then happily overfitting to that data; since you have so much data, you don’t care that you overfit (hence examples like predictions made based on a doctor’s hand in the picture rather than the actual thing the model is meant to be forecasting).
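Here’s a minimal sketch of that effect (synthetic data and a made-up “hand visible” flag, not anyone’s real study): a model that scores well in training by latching onto a collection artifact, then falls apart once the artifact stops correlating with the label.

```python
# A toy sketch of a model overfitting to a spurious feature: a fake
# "hand visible" flag that correlates with the label in training
# but not in deployment. (Hypothetical data, purely illustrative.)
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 1000

# True signal: a weakly informative measurement.
signal = rng.normal(size=n)
y = (signal + rng.normal(scale=2.0, size=n) > 0).astype(int)

# Spurious feature: in the training data, the doctor's hand appears
# almost exactly when the label is positive (a data-collection artifact),
# so the model can latch onto it instead of the signal.
hand_in_train = y ^ (rng.random(n) < 0.02)   # ~98% aligned with the label
hand_in_test = rng.integers(0, 2, size=n)    # no relationship at test time

X_train = np.column_stack([signal, hand_in_train])
X_test = np.column_stack([signal, hand_in_test])

clf = DecisionTreeClassifier(max_depth=3).fit(X_train, y)
print("train accuracy:", clf.score(X_train, y))  # high: the tree rides the artifact
print("test accuracy:", clf.score(X_test, y))    # drops sharply toward chance
```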

When you just black-box the shit out of something (the way I think of machine learning) you completely lose the ability to intuit how it works, so if the predictions end up being bad it can be hard to control for. Now I suppose if you know it keeps showing porn and you don’t want porn you can put in controls for that, but what about predictions that you’d consider “bad” (maybe even in a legal sense) when you’re not yet aware of those potential predictions… sort of an unknown-unknowns issue.

Like, if your machine-learned AI makes a prediction deemed illegal that you hadn’t foreseen, and now you’re sued, maybe you decide brute-forcing a model isn’t that useful in ALL applications. Fair enough for face recognition, but maybe not for other applications… thoughts?

This is why the laws will usually dictate strict liability on the part of the user of the AI in these situations.

Will have to look. My first guess is that LLMs are bad at math, in general.

…is a thing that is often later proven wrong.

Some might say not fair enough for face recognition either?

Disagree.

The concept of imagination is alien to it.

That’s where creativity and lateral thinking come from.

It can solve problems…that it has “seen” before in the data.

It fails on new problems that require expert human judgment.

Nah, the main problem is that it has too much imagination.

Instead of saying “I don’t know” it hallucinates a vivid believable answer.

That’s not imagination though.

That’s simply the result of an optimised algorithm that reduces the data (input) to a result (output).

That algorithm is not imagination. It’s just sophisticated data parsing with probabilities (which allow various optimised “choices” to converge into results).

The leaps of logic that imagination gives someone are “outside the black box” (best way I can put it).

Making intuitive leaps is not possible for this version of AI, based on the math I have seen.
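To make the “input reduced to output via probabilities” point concrete, here’s a toy sketch (a bigram model, nothing like GPT-4’s actual transformer, and the corpus is made up for illustration): every word it emits is sampled from continuations it has already seen, so it literally cannot produce a transition that isn’t in its data.

```python
# A toy illustration of "probabilities converging into results":
# a bigram model that only ever reproduces word transitions it has
# seen in its training text. (GPT-4 is vastly more sophisticated,
# but its output is still a sample from a learned distribution.)
import random
from collections import defaultdict

corpus = "the rose stands tall the rose dances in the light".split()

# Count every observed word -> next-word transition.
transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

def generate(start: str, length: int = 8) -> str:
    """Sample a continuation; it can only emit transitions seen in training."""
    words = [start]
    for _ in range(length):
        options = transitions.get(words[-1])
        if not options:  # no observed continuation: the model is stuck
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))
```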

Have you familiarised yourself with neural nets?

I am seeing ML used more and more in insurance-based applications (hence why I ask).

No, I wouldn’t say actuarial ML is even remotely like GPT-4 anyway.

I’m pretty sure our pricing model can’t write good poetry or explain a meme.

It’s the same concept. They just made it more sophisticated (due to the size of the training set).

We deal with much smaller data sets.

A worm’s brain is the same concept as a human brain.

How do you define “same”?

A worm’s brain has 302 neurons in its CNS, while a human brain has 100 billion+.

It’s the interconnected nature of the human brain that allows for higher levels of intelligence (vs the worm).

ChatGPT is the same, really, vs basic NNs.

They have billions of clusters (mimicking neurons) which are controlled by a sophisticated algorithm.

Feed it enough data and you will be able to calibrate it more accurately.

But that’s it really. It’s brute force.

Much like the development of the infant/toddler brain.
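For anyone who hasn’t poked at neural nets, here’s a minimal sketch (random weights, purely illustrative, not any real model’s architecture) of the unit being “mimicked”: each artificial neuron is just a weighted sum of its inputs pushed through a nonlinearity, and the worm-vs-human (or basic-NN-vs-ChatGPT) difference is essentially how many of these you wire together and how much data you use to set the weights.

```python
# A minimal forward pass through a tiny neural net. Each "neuron" is a
# weighted sum plus a nonlinearity; scale is the main thing that changes
# between a 302-unit toy and a GPT-class model.
import numpy as np

rng = np.random.default_rng(0)

def layer(x: np.ndarray, w: np.ndarray, b: np.ndarray) -> np.ndarray:
    """One layer of artificial 'neurons': weighted sums, then a ReLU."""
    return np.maximum(0.0, x @ w + b)

# A 302-unit "worm-scale" hidden layer; GPT-class models stack thousands
# of much wider layers, but each unit does exactly this arithmetic.
x = rng.normal(size=(1, 10))             # one input with 10 features
w1, b1 = rng.normal(size=(10, 302)), np.zeros(302)
w2, b2 = rng.normal(size=(302, 1)), np.zeros(1)

hidden = layer(x, w1, b1)
output = hidden @ w2 + b2
print(output.shape)  # (1, 1): the input reduced to a single result
```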