GPT passed the Turing Test. In 5-minute chats, GPT 4.5, pretending to be a chill dude who doesn’t bother with punctuation, was judged more human than actual humans 70% of the time.
This is unsurprising to anyone who has ever used a chatbot, but still I’m kind of surprised it didn’t receive more fanfare.
If nothing else, it marks the last time we will be able to reliably identify bots on the internet.
But how many would really understand the Turing Test?
Also, nothing in that article describes the qualifications of the participants/interrogators to detect AI. So is it the case that AI can “fool” mostly naive people? Or can it also fool an ardent skeptic who has ways to test an AI against a human stranger from a culture outside any experience of the interrogator?
Note to editors: If you are going to use ChatGPT to write a listicle for you, you should make sure that the items on the list exist. From Sunday’s Chicago Sun-Times, a summer reading list of books that don’t exist, but at least are attributed to real authors. Oddly, the ones near the end are real books; it’s mostly the start of the list that is hallucinated.
There’s a link to the full study (pdf) in my link. The humans are not qualified: one pool is random, another is undergrads. The undergrads, who talk to AI all the time, did better, though I don’t know if they did better specifically against GPT 4.5 or just better in general.
The bots also varied a lot. Some were told to act with a personality. Others just played the game. And they threw in an Eliza for good measure.
I think an expert could probably win. Also, the conversations are very short -- like 10 texts back and forth, with half of the messages being “fr bro lol”.
Lewis Black reports on the current state of ChatGPT on the Daily Show. (Caution: lots of bleeps were apparently required to make this legal for broadcast on basic cable.)
I am a hot minute away from launching a life insurance chatbot.
I took a ton of information from trusted sources only, and used that as the ‘context’.
Initial testing was disappointing. Then we realized it wasn’t using our context; it was basically just using ChatGPT. So we switched that around, and it still had a few problems. Then we tweaked priorities on some of our data sources and…yesterday the results I was getting were either amazing or, once in a while, pretty darn good.
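For anyone curious what “tweaking priorities on data sources” can look like in practice, here’s a minimal sketch. The source names, priority scheme, and function are hypothetical illustrations, not the poster’s actual setup: the idea is just to rank your trusted context above everything else and tell the model to answer only from it.

```python
# Minimal sketch of prioritized context assembly for a RAG-style chatbot.
# Source names and the priority scheme are hypothetical, for illustration.

def build_system_prompt(question, sources):
    """Order context snippets by priority (highest first) and instruct
    the model to answer from that context only."""
    ranked = sorted(sources, key=lambda s: s["priority"], reverse=True)
    context = "\n\n".join(f"[{s['name']}]\n{s['text']}" for s in ranked)
    return (
        "Answer using ONLY the context below. If the answer is not in "
        "the context, say you don't know.\n\n"
        f"{context}\n\nQuestion: {question}"
    )

sources = [
    {"name": "blog-scrape",  "priority": 1,
     "text": "Term life is cheap."},
    {"name": "carrier-docs", "priority": 9,
     "text": "Term life covers a fixed period of years."},
]
prompt = build_system_prompt("What is term life insurance?", sources)
# carrier-docs appears before blog-scrape because it has higher priority
```

The prompt string then goes to whatever model you’re calling; the fix the poster describes is essentially making sure this curated context, not the model’s general knowledge, dominates the answer.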
I’m curious why you say that. From what I’ve seen, the prices are very low compared to salaries, and while some might be subsidized, it also doesn’t cost much to run an open source model internally.
But you’re the only guy I know actually running an LLM, so wtf do I know.
The short answer is, you can’t run ChatGPT internally ‘cheaply’ or really at all.
You can access it through an API at absurdly inexpensive prices - that’s what we’re doing. But that requires a developer, which isn’t what most people are going to be doing.
Most people, of course, access it through the ChatGPT website and the prompts that are there, which is free right now. All you folks using it at work, you’re going to the ChatGPT website, right? And the guy in the article:
opting to run social-media copy through ChatGPT instead.
again, running it through the free ChatGPT prompts on their website.
When and if that gets pulled or limited, people are going to have to start paying. And possibly, paying a lot. If the person in the article is claiming that they save the cost of an intern, what’s a reasonable cost for chatgpt access? $1k/month? $500/month? Even at $100/month a lot of people won’t use it.
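For a sense of scale on the API side, here’s a back-of-envelope cost estimate. The per-million-token prices below are placeholder assumptions I made up for the sketch, not actual published pricing, so plug in real numbers before drawing conclusions:

```python
# Back-of-envelope API cost estimate.
# The per-million-token prices are ASSUMED placeholders, not real pricing.
PRICE_IN_PER_M = 2.50    # $ per 1M input tokens (assumed)
PRICE_OUT_PER_M = 10.00  # $ per 1M output tokens (assumed)

def monthly_cost(chats_per_day, tokens_in, tokens_out, days=30):
    """Rough monthly API spend for a given chat volume."""
    per_chat = (tokens_in * PRICE_IN_PER_M
                + tokens_out * PRICE_OUT_PER_M) / 1_000_000
    return chats_per_day * days * per_chat

# e.g. 200 chats/day, ~1k tokens of prompt+context in, ~500 tokens out
cost = monthly_cost(200, 1000, 500)
# -> 45.0, i.e. about $45/month at these assumed rates
```

Even if the real rates are several times these placeholders, API usage at that volume lands nowhere near an intern’s salary, which is the point being argued below.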
Granted, maybe they don’t ever charge for the access. But, I’m sceptical of that.
My company runs an internal model, to avoid data leakage. My point is not that you should do that (especially not on your own pc) but just that OpenAI (Google, Anthropic, Deepseek, Microsoft) are not keeping their prices artificially low. It really is quite cheap.
Anyway, I think if a model is actually able to replace an EL worker-- which who knows-- then we’re talking about like $20/month for $5k/month in savings. It’s so insanely cheap compared to paying a human being, that a little variation won’t matter.
I agree that lots of people use AI just for funsies, or to cheat in high school. And yeah, they might cut out if the free tier disappeared. Though I expect there will also always be a very-very-cheap tier available.
I suspect they’re running one of the models through an API. I don’t think you can run ChatGPT locally, and it’s not an easy task to take the model and train it yourself.

Or they’re running a very scaled-down training set. Which is sort of what I’m doing. I have my own training data that we draw answers from, then we let ChatGPT interpret the answers.