I’m guessing 30% of that is “cheating in school.”
10.2% “tutoring or teaching”, 18.3% “seeking information - specific info”, 3% “mathematical calculation”, 4.2% “computer programming”, 28.1% “writing”…
I’m not certain about the demographics of ChatGPT users, but I could believe 15-20% is “cheating on schoolwork”; more if the userbase is skewed heavily towards school-age.
I sent this to my granddaughter and she was so excited, asking if it was actually real. She recently did a class presentation on the history of "6 7", and as part of that my wife got her a shirt with "6 7" on it.
So I did some poking around and it seems that it is true.
Once I tried to get Claude to solve a logic puzzle. I forget the details.
It guessed wrong. So I asked "are you sure?" and it was like "let me check… oops, that's wrong." And then it would guess wrong again. Over and over.
So I suggested that it create an artefact with some code that would solve the problem, and then run the code.
And it did. Except the answer was still wrong.
Looking at the code though, it was actually fine. I ran it myself.
The problem was that the AI was hallucinating that it could execute code, and was instead guessing what the output should look like.
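For what it's worth, the artefact I had in mind was just a brute-force checker, something like this (a made-up stand-in puzzle, since I genuinely don't remember the real one):

    from itertools import permutations

    # Made-up stand-in puzzle: Ann, Bob and Cal each own a different pet.
    # Clues: Ann doesn't own the dog, Bob doesn't own the fish, Cal owns the cat.
    # Enumerate every assignment and keep the ones consistent with all the
    # clues -- no guessing, which is exactly what Claude kept doing instead.
    people = ["Ann", "Bob", "Cal"]

    for pets in permutations(["cat", "dog", "fish"]):
        owner = dict(zip(people, pets))
        if owner["Ann"] != "dog" and owner["Bob"] != "fish" and owner["Cal"] == "cat":
            print(owner)  # prints the single consistent assignment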
So far the worst problem I've had with code the AI has provided is that it used a reserved word as a variable name, so it wouldn't compile. I haven't taxed it too much in creation mode, just snippets so far. I've had it check syntax and write comments, mostly.
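For example, the same kind of mistake reproduced in Python (not necessarily the language I was using), where the reserved word gets rejected before anything runs:

    # 'class' is a reserved keyword, so using it as a variable name is
    # rejected at parse/compile time, before any of the code executes.
    try:
        compile("class = 'Economy'", "<snippet>", "exec")
    except SyntaxError as err:
        print("won't compile:", err.msg)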
I emailed compliance last week to confirm that the use of public-facing AI was not compliant for any purpose, and to ask what framework would be compliant.
They responded with ‘just make sure to anonymize the information, and that’s compliant’.
Ooook, so that triggered a huge long email where I gave examples of how a couple of anonymous prompts can specifically identify a single individual in the entire country, and how one additional anonymous prompt can then divulge sensitive client health information. Then I kind of asked them if they'd like to reconsider their position.
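The gist of those examples, with made-up numbers (purely illustrative; not the actual prompts or figures I sent):

    # Each "anonymous" prompt leaks one quasi-identifier. Intersecting a few
    # of them can narrow an entire country down to roughly one person, at
    # which point a later prompt mentioning a diagnosis isn't anonymous at all.
    population = 330_000_000  # ballpark national population

    # Illustrative share of the population matching each leaked attribute
    # (treated as independent for simplicity).
    leaks = {
        "rare occupation in prompt 1": 0.0001,
        "home city in prompt 2":       0.002,
        "age band + sex in prompt 3":  0.01,
    }

    remaining = population
    for attribute, share in leaks.items():
        remaining *= share
        print(f"after '{attribute}': ~{remaining:,.0f} people still match")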
We will see. Knowing compliance, when faced with facts that they don’t want to argue, they’ll ghost me. It’s happened before when I’ve challenged their assumptions.
Wild times in the hardware field.
We had llama 3.3 running on an AWS AI instance. Working fine, along with a bunch of other models.
I wanted to run llama 4, but it won't run on our current instance; not enough horsepower. So we had to put in a customer service request for a different instance with more GPU oomph so we could install 4. Took a couple of days, and we got permission.
And now we're getting the message that our geographic locale doesn't have enough capacity. So… no llama 4 unless I drop 150k on a server, I guess.
Which is fine; llama 3.3 works great in every circumstance I've seen but one. Just weird that Amazon can't run a current model for me.
