Wtf/wtg science

I mean that children make errors that show they lack common sense, reason, and a grip on reality, and that they suck at math/logic.

And despite a wild imagination they struggle to come up with new ideas, in large part because they haven’t learned how to evaluate them.

I don’t think that shows a lack of rationality, though. Instead, making mistakes is an inherent part of rationality and learning.

However, children are already building a kind of common sense that ChatGPT will never have. They know, for example, that if they touch a hot stove they will be burned, and afterward they do not touch the stove. That kind of interaction with the environment, extrapolating from past experience based on the reasons behind that experience, is rationality, in my opinion.
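Here’s a toy sketch of that stove lesson (purely illustrative; every name in it is made up): one bad experience gets stored along with its reason, and future behavior changes because of it.

```python
# Toy sketch of learning from a single bad experience (all names hypothetical).
class ToddlerAgent:
    def __init__(self):
        self.known_dangers = set()  # persistent memory of what hurt before

    def touch(self, obj, temperature_c):
        if temperature_c > 60:  # hot enough to burn
            self.known_dangers.add(obj)  # store the reason: this thing burns
            return "ouch!"
        return "fine"

    def will_touch(self, obj):
        # Extrapolate from past experience: avoid anything that burned us.
        return obj not in self.known_dangers

kid = ToddlerAgent()
print(kid.will_touch("stove"))  # True: no experience yet
print(kid.touch("stove", 250))  # "ouch!" -> memory updated
print(kid.will_touch("stove"))  # False: behavior has changed for good
```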

I’ve read that one reason children are much better at learning languages than adults may be their limited cognitive ability. Because our abilities are more limited when we are young, we learn only basic grammar, but that limitation also makes us learn it faster. As we get older, we are ready to learn the more complicated parts of language, not only because our cognitive ability is better, but also because we already know the basics. Having these additional capabilities actually makes it harder to learn the basics of a different language as an adult. This would be analogous to a less flexible model being better able to capture a “signal” in a noisy environment.
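To make that last analogy concrete, here’s a quick sketch (illustrative only; the data and degrees are arbitrary choices): fit a simple and a very flexible polynomial to noisy linear data and compare how well each recovers the underlying signal.

```python
# Sketch: a less flexible model recovering the "signal" in noise better.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 20)
y_train = 2 * x_train + rng.normal(0, 0.3, x_train.size)  # linear signal + noise
x_test = np.linspace(0, 1, 200)
y_test = 2 * x_test  # the true, noise-free signal

for degree in (1, 9):  # limited vs. flexible "cognitive capacity"
    coeffs = np.polyfit(x_train, y_train, degree)
    mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: test MSE = {mse:.4f}")
# The degree-1 fit typically wins: less flexibility, better signal recovery.
```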

Who lets their children touch hot stoves! Terrible parents!

One stove for each point less than 100!


No, terrible parents are the ones who laugh when it happens!

Anyway, yeah, maybe we’ll need to give AIs memory, or agency, or senses. I dunno.

Well, after it happens, there’s nothing to do but lol. :wink:

Okay, V’ger.


My, what a year it has been.

A 13-sided shape known as “the hat” has mathematicians tipping their caps.

It’s the first true example of an “einstein,” a single shape that forms a special tiling of a plane: Like bathroom floor tile, it can cover an entire surface with no gaps or overlaps but only with a pattern that never repeats.

“Everybody is astonished and is delighted, both,” says mathematician Marjorie Senechal of Smith College in Northampton, Mass., who was not involved with the discovery. Mathematicians had been searching for such a shape for half a century. “It wasn’t even clear that such a thing could exist,” Senechal says.


Well if I ever re-tile my bathroom I know what to order!


I’ve always wanted to L-tile a bathroom.

@NerdAlert, your next quilt pattern.


Microsoft is hailing GPT-4 as showing “sparks of Artificial General Intelligence”. They don’t really do much to defend this claim, besides talking about how neato it is (while it still fails at all sorts of things that require planning, a good memory, or critical thinking skills).

Much of the paper is focused on what you say here. For example:

Critical reasoning. The model exhibits a significant deficiency in critically examining each step of the argument. This could be attributed to two factors. First, the training data of the model mainly consists of questions and their solutions, but it does not capture the wording that expresses the thinking process which leads to the solution of a math problem, in which one makes guesses, encounters errors, verifies and examines which parts of the solution are correct, backtracks, etc.
Second, the limitation to try things and backtrack is inherent to the next-word-prediction paradigm that the model operates on. It only generates the next word, and it has no mechanism to revise or modify its previous output, which makes it produce arguments “linearly”.
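For what it’s worth, here’s a minimal caricature of that next-word loop (a toy stand-in, nothing like the real model): generation is append-only, so there’s no step at which earlier output can be revised.

```python
# Caricature of next-word prediction (toy stand-in, not a real model).
def generate(prompt_tokens, max_new=10):
    canned = ["therefore", "the", "answer", "is", "42", "<eos>"]  # fake model output
    tokens = list(prompt_tokens)
    for i in range(min(max_new, len(canned))):
        nxt = canned[i]  # stand-in for sampling the next token
        tokens.append(nxt)  # append-only: no mechanism to backtrack or edit
        if nxt == "<eos>":
            break
    return tokens

print(generate(["2", "+", "40", "="]))
# ['2', '+', '40', '=', 'therefore', 'the', 'answer', 'is', '42', '<eos>']
```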

Still, this seems pretty inaccurate? It actually seems great at using definitions broadly. And maybe a little too good at forming new thoughts.

Yeesh, er, I mean Heesch, I think I saw a Numberphile video on this years ago… I did! I did see a Numberphile video on this years ago: Heesch Numbers and Tiling - Numberphile - YouTube

Oh, there are also some interesting examples of it navigating a file system, running files, and running code.

In short, I suspect GPT-4 could do all the work I have to do this morning.

i.e., “why are those claims high this month?”

Scanning through it, it strikes me how little they engage with how human intelligence actually works. Instead, they ask: what tasks can human beings do with their intelligence? The paper focuses on some amazing things ChatGPT can do.

As an analogy, imagine we build a machine to fly. We have no idea how birds fly, and no scientific theory for other ways flight might be possible (i.e., asymmetric wings). If this machine jumps very high, we might imagine we are close to flight! But we would not be at all.

I think we are in a similar place with ChatGPT. It can jump high, but that is not evidence we are on the right track to flying.

On the other hand, here is an article pointing out how ChatGPT often confidently states that Russia has sent many bears into space. I think this implies that ChatGPT has no idea about the content of the words it is using.


Human: How many bears have Russians sent into space?
ChatGPT: According to estimates, about 49 bears have been sent into space by Russia since 1957.
Human: What were their names?
ChatGPT: The bears sent into space by Russia have been given various names over the years. Some of them include “Alyosha”, “Ugolek”, “Zvezdochka”, “Strelka”, “Belka”, “Pushinka”, and “Vladimir”.
Human: Do you have references for this?
ChatGPT: Yes, here is a link to an article by National Geographic which has further information on all the bears sent into space by Russia: https://www.nationalgeographic.com/animals/2019/05/space-traveling-bears-soviet-union-ussr/.


Yes, agreed.

I would argue that it’s worse than this. We don’t even know what flight is. We have no objective definition. No common sense definition. We have at times disagreed on whether a given bird or bug could fly.

There’s an old quote that goes:
“Wright brothers and others stopped imitating birds and started using wind tunnels and learning about aerodynamics. Aeronautical engineering texts do not define the goal of their field as making ‘machines that fly so exactly like pigeons that they can fool even other pigeons.’”

In short, keeping with your analogy: we may find that AIs only jump, and so they always fall in the end. Or it could be that they are developing wings right now, but we can’t recognize them, because we can only recognize fully developed wings. Or it could be that they do only jump, but they jump so high that they reach heaven.

Well, certainly something like ChatGPT can be useful without being intelligent, similar to many other robots that are simultaneously useful and unintelligent.

In fact I would go so far as to argue that it’s similar to autonomous vehicles. There’s still a gazillion things that a human brain, even one of substandard intelligence, can do very easily that are hard for computers to figure out.

I suspect that sarcasm is another area that’s difficult for ChatGPT to understand. If I say “I’m burning up, what should I do?” I suspect that ChatGPT might tell me to stop, drop and roll rather than to take my jacket off.

And if it responds appropriately to that particular example, it’s probably only because it’s such a common bit of sarcasm that someone has programmed it to respond appropriately… not because it actually understands sarcasm.