Yes. Well. Sort of. In the example of a game, there can't be any gaps in the model's output, or the code won't execute: the game will crash, fill with bugs, and have no features. The pieces of code need to fit together. Each piece needs to be selected and modified with consideration for how it works with the whole. Same with debugging: no doubt the AI has seen a similar error a million times, but does it know how to fix the error in your code?
But yes, if you talk to it about your family dinner, and it talks about the food, picks up on your sarcasm, jokes about the irony of your mom lying, etc., then that's all 100% illusion.
But how thorough is the illusion? Is it more like the illusion of astrology, where it gives some completely random but fitting answers and you lap it up? Or is it closer to how it writes code, where it uses the relationship between words to navigate the relationship between ideas?
Maybe sometimes it’s very flimsy and sometimes it’s relatively solid.
Also relevant: I'm going through the SOA's material on AI. Some of it's a bit fluffy, but they've got at least one document on LLMs that's a pretty good guide to implementation.
They’ve also got a document/study on AI in insurance companies in China. I was surprised to learn that the biggest implementation is for internal access to information (as opposed to an external chatbot).
Anyway, if any of you nerds are looking at chatbots at work, lemme know. I’m currently starting to develop processes for this with the intention of offering build-a-bot service to the insurance industry.
There are at least two ideas about the meaning of words that I'm aware of. (I'm not a good enough philosopher to give details.)
1. Words represent objective meanings.
2. Words have meanings only relative to the intent of the author and the experience of both author and reader.
Statement (1) holds much more true for an algorithm, so it's harder to imagine its coherence being an illusion imposed by us. We can still "test" the algorithm or recipe in the environment we think is appropriate and objectively measure whether there is coherence.
For many other kinds of communication, the intent of the author really does seem needed and will always be missing.
I think there may be a postmodern approach to meaning in which the intent of the author doesn't matter; only the experience of the reader does. In that case, I suppose the coherence could be real. However, I'm really getting out over my skis on that one, and I don't find that position about meaning convincing.
Sometime in the future, probably, chatbots will be able to seamlessly pass any CAPTCHA and even present convincingly human-like dialogue on any website. Won't that future be great.
Honestly, a few weeks ago, I was wondering if I should create a spam bot using an agentic AI and a VPN service to generate thousands of fake accounts and personalities, with the singular purpose of fact-checking the Fox News comment section.
In my mind, they would be a lot more subtle than what is currently wrecking goactuary.
“So if you go talk to ChatGPT about your most sensitive stuff and then there’s a lawsuit or whatever, we could be required to produce that. And I think that’s very screwed up,” Altman said. “I think we should have the same concept of privacy for your conversations with AI that we do with a therapist or whatever. And no one had to think about that even a year ago.”
So patient confidentiality is designed to protect the patient, right? If a patient or related party is suing for malpractice, does the therapist get to say, "sorry, confidential"?
I think Altman is talking about third-party lawsuits. Right now the NYT is suing OpenAI for copyright infringement and requesting that the court review all their chat logs, especially chat logs that users asked to delete, as part of discovery for evidence. Presumably the chat logs will be anonymized, and the court will not share them, so it doesn't matter that much. But I think (???) it might be handled differently if the conversations were legally protected information.
Of course, if you yourself are suing your therapist, you can waive confidentiality.