There is a move to get Musk to do something about AI images on Twitter using images generated by Grok:
Spoilers so images don’t ruin the thread:
I can’t tell if the author thinks that is good or bad.
Is it not a pot shot because Musk was calling for AI controls fairly recently?
Yikes
To me this is what will constrain AI. In insurance we can’t so easily machine-learn (read: overfit) a model with no intuition, because of regulatory constraints and because the insurer owns the implications of the model output (i.e., it needs to be able to explain how the model works). I think at some point we’ll figure out that AI firms are liable for the output, and they’ll suddenly decide they care about controls. Until then, free rein.
I also think they don’t understand latent risk yet. Something as perverse as AI can lead to too many issues that they aren’t even aware of yet.
Elon won’t care until it starts giving Elon’s assassination coordinates in real-time.
Apropos of nothing, does AI know the difference between Regulation XXX and porn?
Regulation XXX vs. Porn: A Clear Distinction
There seems to be a misunderstanding.
If you could provide more context or information about where you encountered the term “Regulation XXX,” I might be able to give you a more accurate answer.
Here are some possibilities based on common misconceptions:
Gemini failed.
or his private jet coordinates
I’m waiting for the right confluence of events for ChatGPT/Grok/etc. to spit out something like, “Yes, it is considered safe to drink bleach if you first heat it to 150 degrees” and somebody to die.
“Next, on ‘Fear Factor’…”
Seems they did CYA pretty well, so I wouldn’t consider it a full-on fail. Give it more specificity on Regulation XXX so it can go try to find it in the NAIC or AAA literature. But you would probably need to be more specific than just referring it to the AAA literature as well.
In other words, if you know the answer to the question you are asking, you can be specific enough in your question for the AI to generate a useful response.
I’ve played with Gemini and chatgpt, and it’s clear that neither has been trained on actuarial material. They could be, of course. But it hasn’t happened yet.
The more I see of all of it, the less intelligent it seems. It’s entering the uncanny valley of intelligence rather than leaving it.
“Please yell into this vacuum, where no one can hear you, especially me…”