I don’t think that represents rationality, though. Instead, making mistakes is an inherent part of rationality and learning.
However, children are already building a kind of common sense that ChatGPT will never have. They know, for example, that touching a hot stove burns them, so in the future they avoid the stove. That kind of interaction with the environment, in which we extrapolate from past experience by reasoning about its causes, is what I would call rationality.
I’ve read that one reason children are much better at learning languages than adults may be their limited cognitive capacity. When we are young, we can learn only basic grammar, but that very limitation also lets us learn it faster. As we get older, we are ready for the more complicated parts of language, not only because our cognitive ability has improved, but also because we already know the basics. Those additional capabilities, however, actually make it harder to learn the basics of a different language as an adult. This would be analogous to a less flexible model being better able to capture the signal in a noisy environment.