Wtf/wtg science

I mean that children make errors that show they lack common sense, reason, and a grip on reality, and that they suck at math/logic.

And despite a wild imagination they struggle to come up with new ideas, in large part because they haven’t learned how to evaluate them.

I don’t think that represents irrationality, though. Instead, making mistakes is an inherent part of rationality and learning.

However, children are already building a kind of common sense that ChatGPT will never have. They know, for example, that if they touch a hot stove then they will be burned. In the future, they do not touch the stove. That specific interaction with the environment, which extrapolates from past experience based on reasons for that experience, is rationality, in my opinion.

I’ve read that one reason children are much better at learning languages than adults may be their limited cognitive ability. We learn only basic grammar when we are young because our abilities are more limited, but that limitation also makes us learn it faster. As we get older, we are ready to learn the more complicated parts of language, not only because our cognitive ability is better, but also because we already know the basics. Having these additional capabilities actually makes it harder for us to learn the basics of a different language as an adult. This would be analogous to a less flexible model being better able to capture a “signal” in a noisy environment.
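The analogy at the end is essentially the bias–variance tradeoff. A minimal sketch (my own illustration, not anything from the thread): fit a constrained model and a very flexible one to noisy samples of a simple underlying signal, and compare how well each recovers the true signal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of a simple underlying "signal" (a straight line).
x_train = rng.uniform(-1, 1, 20)
y_train = 2 * x_train + rng.normal(0, 0.5, 20)

# Noise-free ground truth on a dense grid, for evaluation.
x_test = np.linspace(-1, 1, 100)
y_test = 2 * x_test

def signal_error(degree):
    """Mean squared error of a degree-`degree` polynomial fit
    against the true underlying signal."""
    coeffs = np.polyfit(x_train, y_train, degree)
    return np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)

# "Limited" learner: can only capture the basics (a line).
simple = signal_error(1)

# Flexible learner: powerful enough to fit the noise, too.
flexible = signal_error(15)

print(simple < flexible)
```

With this seed the constrained fit tracks the underlying line far better than the high-degree fit, which chases the noise, which is the sense in which limited capacity can help when the signal is simple.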

Who lets their children touch hot stoves! Terrible parents!

one stove for each point less than 100!


No, terrible parents are the ones who laugh when it happens!

Anyway, yeah, maybe we’ll need to give AIs memory, or agency, or senses. I dunno.

Well, after it happens, there’s nothing to do but lol. :wink:

Okay, V’ger.


My, what a year it has been.