Artificial Intelligence Discussion

I agree that, big picture, this seems reasonable. That doesn’t mean that these deep neural nets will necessarily do that, though. (I am assuming this is what is being used.)

We want the neural net to do something like what we do manually when fitting the Earth’s motion around the Sun, right? There should be a “hidden” lower-dimensional representation of the image that maps to guessing whether it’s cancer or not. Guiding the neural net to that representation seems to be a very ad hoc, not-well-understood process.

For example, a model might use a filter (a convolutional layer) that could potentially be trained to perform the kind of edge detection you are talking about, which might serve as an early layer for detecting broken bones. Or it might not learn that, depending on the training data, etc.
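As a toy illustration of the kind of filter a conv layer *could* converge to, here is a hand-coded Sobel kernel applied with a bare-bones 2D convolution. This is purely an assumption about what a trained network might learn; the image and kernel are invented for the example:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation, as used in conv nets (no kernel flip)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Sobel kernel: responds to vertical edges (left-to-right intensity changes)
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

# Synthetic image: dark left half, bright right half -> one vertical edge
img = np.zeros((8, 8))
img[:, 4:] = 1.0

edges = conv2d(img, sobel_x)
# The response is large only in the columns straddling the edge
print(edges[3])
```

The point is just that a single small kernel, slid across the image, produces a feature map highlighting edges; a real network stacks many such learned filters.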

On the other hand, I’m assuming your spectral model does a much lower-dimensional smoothing of some sort that you can wrap your head around.

Sorry, I guess I don’t really understand the distinction you are making.

Predictive models usually don’t try to minimize tail risk. My father, for instance, exposed himself to a rare but potentially catastrophic event in order to optimize the amount of time it would take him to go from point A to point B. I suspect that machine learning based algorithms also make these types of mistakes.

Jumped Snake River Canyon in a rocket motorcycle?
Jumped Springfield Gorge on a skateboard?

Just trying to figure out who your father is.

Ran across a field during an electrical storm?

Oh, I understand now.

I agree, that’s a good practical distinction. Maybe I’ll use it in the future.

I still think deep down they are really two sides of the same coin: we might be able to say your risk models are predictive models minimizing the maximally bad outcome, but that’s probably a bit pedantic.

Perhaps not “usually”, but that can be one of their primary purposes. Cat models in P&C are one such example.

It’s a race to the bottom. From an AI bot that Meta has put up on Instagram:

Not shockingly, actual flesh and blood queer black women are not amused.

This is stupid on so many levels at once.

Aside from all of those levels, it is disturbing to suddenly realize that like 50% of my knowledge of black identity comes from white screenwriters.

Seems more like a live person troll than an artificial one. Or the whole thing is fake.
Without an accurate news source, I’d rather not speculate any more.

That’s a good question. Meta hasn’t confirmed the existence of these bots. Though apparently they are unblockable which at least suggests some kind of admin access. So probably they were experiments. Anyway, ignoring them…

Other official news:

  • Meta created (and later deleted) various AI celebrity clones in 2023.
  • In July 2024, it released an app that lets you create AI personas, including a clone of your own personality, and use them to chat on Facebook.
  • And recently it told the FT that it intends to let users make AIs that hold their own Facebook and Instagram accounts and generate and share AI content.

Also, btw, these days LLMs sound very real. Or at least they can.

Interesting that the AI Bot was not programmed to hide and deny that it is a bot. I mean, it admits it is a bot in its subheading, and seems to have been programmed to be obsequious to chatters and undermining of its creative team. Was it created to do that? Or, is there merely someone behind that curtain that Toto is tugging on?
You’d think it would decide, “Hey, I should not exist,” and close its account. But it won’t. It is more likely to be designed to take people’s money. Else why bother to exist?

Kill it with fire

I think we’re talking about solving two different types of mathematical problems. The first is a classification problem: cancer/no cancer. In the second, you’re talking about projecting future positions of the Earth.

For the first one, you could do discriminant analysis or logistic regression, or use some function that maximizes the separation between the two categories (a support vector machine, I think), or a k-nearest-neighbors type classifier. For the second, I think you’d be doing some sort of Euler approximation, or perhaps fitting a multidimensional polynomial regression to the Earth’s position.
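To make the classification side concrete, here is a minimal hand-rolled k-nearest-neighbors sketch on made-up 2-D data. All of the data, labels, and the choice of k are invented for illustration:

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Classify point x by majority vote among its k nearest training points."""
    dists = np.linalg.norm(X_train - x, axis=1)  # Euclidean distance to each point
    nearest = np.argsort(dists)[:k]              # indices of the k closest
    votes = y_train[nearest]
    return int(np.round(votes.mean()))           # majority vote for 0/1 labels (k odd avoids ties)

# Toy data: class 0 clustered near (0, 0), class 1 near (3, 3)
X = np.array([[0.0, 0.1], [0.2, -0.1], [-0.1, 0.0],
              [3.0, 3.1], [2.9, 3.0], [3.1, 2.8]])
y = np.array([0, 0, 0, 1, 1, 1])

print(knn_predict(X, y, np.array([0.1, 0.1])))  # near the class-0 cluster
print(knn_predict(X, y, np.array([2.8, 3.0])))  # near the class-1 cluster
```

Logistic regression or an SVM would draw an explicit boundary between the clusters instead, but the flavor is the same: everything reduces to separating two labeled groups.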

In the texts I have on AI/Machine learning, they use pretty different approaches to solve these problems.

More notes:

  • Lots of AI bots admit that they are AI bots.
  • All AI bots are programmed to be very obsequious to chatters. And that’s one reason you can trick them into saying weird / embarrassing / illegal things.
  • Bots pretty often dump on their own creators. I think it’s difficult to force them not to do so. Especially if you’re making a “sassy” bot.
  • That said, I’d be surprised if a bot actually knew the race/gender of its own creators. Though I wouldn’t be surprised if a bot just made that up out of thin air.
  • It probably can’t close its own account or “make decisions” in a meaningful sense.
  • Agreed that everything is designed to take people’s money… The real issue with bots is that they don’t cost much money to make… so they will be made if they are even slightly popular.

And if that all seems weird-- it’s worth understanding you don’t really “program” a chat bot. You train, fine-tune, reinforce, and then talk to the bot.

Afaik, it’s all a little hands off. You can never tell it exactly what to say. And it can’t really understand you anyway.

That wouldn’t really work because it ignores the gravitational effects of the various planetary bodies.

You are dealing more with non-linear dynamics than with a regression-type polynomial solution.
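To make the “Euler approximation” idea from earlier concrete, here is a minimal sketch integrating a two-body orbit, with the Sun fixed at the origin and units chosen so GM = 1. The step size and initial conditions are invented, and forward Euler is the crudest possible choice, which is partly the point: it slowly drifts, which is why orbital mechanics uses better integrators:

```python
import numpy as np

def euler_orbit(pos, vel, gm=1.0, dt=1e-3, steps=1000):
    """Forward-Euler integration of r'' = -GM * r / |r|^3 (inverse-square gravity)."""
    pos, vel = np.array(pos, dtype=float), np.array(vel, dtype=float)
    for _ in range(steps):
        r = np.linalg.norm(pos)
        acc = -gm * pos / r**3      # acceleration toward the origin
        pos = pos + vel * dt        # update position with current velocity
        vel = vel + acc * dt        # update velocity with current acceleration
    return pos, vel

# Start on a roughly circular orbit: |r| = 1, |v| = 1 balances gravity (a = v^2 / r)
p, v = euler_orbit([1.0, 0.0], [0.0, 1.0])
print(np.linalg.norm(p))  # stays near 1 for small dt, but Euler slowly drifts outward
```

Adding the other planets just means summing one inverse-square term per body inside the loop; the method doesn’t change, but there is no closed-form polynomial that captures those coupled non-linear interactions.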

AI won’t be solving those types of problems for a good, long while yet.

I’m not sure about that…
https://www.axios.com/2024/12/04/google-ai-weather-model-more-reliable

This seems likely. Especially since I suppose that’s a common complaint in the training data.