Wtf/wtg science

Sorry, age-related cognitive decline is a thing.

inferior species

I think we have to remember it is like a student who is very good at memorizing definitions, but has absolutely no idea about the content of those definitions. For example, knowing (per Wikipedia) that the aorta is ā€œthe main and largest artery in the human body, originating from the left ventricle of the heart and extending down to the abdomenā€ without knowing what a single one of those words means. You can answer that question on the test, but be completely unable to form any new thoughts, or to put that definition to any practical use.


Is this your presumption or fact? Iā€™m not sure even the creators of AI know that this is true.

Human consciousness is an emergent property of instinctual regurgitation of behavior. It is hard to say that the same won’t emerge for AI.

I think we can very confidently say this is true. That’s partially based on my knowledge of deep neural nets. But it is very consistent with what I have read by people who are experts on thinking (as opposed to just programming these models).

To your point, to the degree that human language has no absolute meaning, it is still all ā€œanchoredā€ in our embodied experience. This model does not have that.

I think there is a modern inclination to think of ourselves as disembodied ā€œsensesā€, taking in data, and then trying to process it according to some absolutely true rationality. For example, Bayesian decision theory is one contemporary candidate for that absolute rationality. Under that view, it is easy to slip into the idea that if we simply build a fancy enough ā€œthinkingā€ machine, getting ā€œdataā€ from training instead of senses, then we can make something that thinks like us, or better than us.
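To make the Bayesian picture concrete, here is a minimal sketch in Python of what that ā€œabsolute rationalityā€ is supposed to amount to: an agent receives data, updates its beliefs by Bayes’ rule, and then picks the action with the highest expected utility. The hypotheses, probabilities, and utilities below are all invented purely for illustration.

```python
# Minimal sketch of Bayesian updating plus an expected-utility decision.
# All names and numbers here are made up purely for illustration.

prior = {"rain": 0.3, "no_rain": 0.7}        # beliefs before seeing any data
likelihood = {"rain": 0.9, "no_rain": 0.2}   # P(dark clouds | hypothesis)

# Observe "dark clouds" and update beliefs with Bayes' rule.
unnormalized = {h: prior[h] * likelihood[h] for h in prior}
evidence = sum(unnormalized.values())
posterior = {h: p / evidence for h, p in unnormalized.items()}

# Utilities of each (action, outcome) pair; again, invented numbers.
utility = {
    ("umbrella", "rain"): 0, ("umbrella", "no_rain"): -1,
    ("no umbrella", "rain"): -10, ("no umbrella", "no_rain"): 0,
}

# The "rational" choice is whatever maximizes expected utility.
best = max(
    ["umbrella", "no umbrella"],
    key=lambda a: sum(posterior[h] * utility[(a, h)] for h in posterior),
)
print(posterior, best)
```

On that picture, nothing requires a body; intelligence is just better priors, likelihoods, and utilities fed through this machinery.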

I think the reality is that we are embodied beings interacting with a changing environment, and the nature of that interaction is incredibly important to our rationality. Under that view, I don’t know that we could ever expect something like ChatGPT to acquire something like ā€œgeneral intelligenceā€, accidentally or otherwise.

I agree that without physical sensations, ChatGPT is limited in what it can infer and conclude.

A mind without a body is significantly less useful.

Iā€™d argue itā€™s not just about physical sensations.

Itā€™s about actively engaging with the environment.

Remember that critical to modern science is, first, recognizing that the world is contingent, and second, actually interacting with it to determine those contingencies. It is a back-and-forth.

Sometimes this back-and-forth gets forgotten. Laplace’s demon is an example of a hypothetical being who gets fed information once (the location and velocity of every infinitesimal particle in the whole universe) and then knows everything by virtue of knowing the laws of physics. This seems like the closest match to what people imagine building with something like a ChatGPT that acquires general intelligence.

But Laplace’s demon is impossible, particularly as a limiting case of finite intelligence. There is no way to predict what will happen knowing just physics. You have to interact with the environment to test ā€œhigherā€ laws of science. Nobody disagrees with this, even the scientists who push a kind of reductionism, like Weinberg or Dawkins.

But ChatGPT sort of has this capability, in that you can ask it subsequent questions that relate to previous questions, and itā€™s able to respond appropriately.

To my mind, that is a different sort of back-and-forth.

I have in mind a back-and-forth with the object of knowledge. In the formal physical sciences, this means doing experiments. But in everyday life, there are many small things we do to build our notions of causality, common sense, etc. Part of how we understand the meaning of ā€œchairā€, say, is by sitting in a bunch of different chairs. You and I can have a back-and-forth on the chair, but usually that will depend on our common experience with different chairs. This is part of how we understand the content of that conversation.

The back-and-forth with ChatGPT is with the user, who is not the object of knowledge.

Do you have a way to test this hypothesis?

It’s certainly true that it has memorized an entirely absurd amount of information, but it’s not clear to me whether it can have a ā€œnew ideaā€, or what that really means.

I donā€™t subscribe much to the notion of lived experience. Mathematicians donā€™t need to live to make up new math. And scholars and creatives mostly just read a lot, then iterate on what theyā€™ve read.

A ā€œnew ideaā€ for them is a random assembly of old ideas that turned out to be meaningful / true / useful / beautiful.

One way is to look for strange errors. One example from images would be when a model designed to create a novel face does something like create an ear with two earlobes, or similar. This shows that the model does not understand the essential features of a face at all. Instead it is doing a kind of empty prediction.

I believe ChatGPT makes similar kinds of errors. It does not matter if it almost never makes these mistakes. Their presence at all shows that it is not doing real thinking.

Usually, when I hear that term, I think of somebody arguing against objectivity in truth. In other words, arguing that some of us have unique access to certain truths because of who we are. So maybe I cannot really understand misogyny unless I have a woman’s ā€œlivedā€ experience. That isn’t really what I’m arguing for.

Instead Iā€™m arguing that for us, embodied activity seems to be a necessary component of our learning, and that this makes it more likely that an artificial ā€œgeneral intelligenceā€ similar to ours would also need to be embodied.

Math is an example in which we can circularly define a set of postulates and then define new stuff.

However, I think that when a person actually does math, they usually need to add content to those postulates. Maybe a line can be defined in terms of points, and a point in terms of a line. But we still have an intuition for them that gives more content than that. And that intuition is based in our embodied experience.

Additionally, there is a freedom even in math, as long as the system is complicated enough. I’m thinking of the axiom of choice, for example (which I don’t pretend to fully understand).

I suspect that any new idea is going to have personal meaning to them, as part of their embodied and value-laden experience. There is a creative vitality there that simply isnā€™t in a machine like ChatGPT.

And to be meaningful to the rest of us, it has to have a similar impact. A work of genius, in art or ideas, is recognized as such because it changes the meaning of our lives. It is not just a random combination of existing ideas.

ChatGPT will never know what tastes good, because it doesn’t have a mouth; it can only reiterate what other people tell it tastes good.

Not only does it not have a mouth, it has no senses at all. So sure, from that perspective, it can tell you less about the world than a human baby.

I don’t know if that’s a sign of lesser intelligence, though. If it did have senses, it might be able to do just as well as humans, but we can’t know this at this point.

I don’t think this really tells us anything, besides that it doesn’t think in the way we expect an average human grownup would think.

Which is pretty much 98% of the content on the internet right now. Thereā€™s a whole industry built around reading articles found on google and regurgitating them into the same information but with different sentences and then republishing them.
In fact, people that work in that industry are already crowing about how much easier it is going to be to produce yet more generic, pablum content.
Iā€™m very much alone in producing actual original and unique content in the life insurance industry online.
And arguably, thatā€™s pretty much all information anyway. Even actuaries are hardly producing anything new in their work - itā€™s almost all just very complex regurgitations.
PhDs and researchers, maybe. But other than that, most of us are hardly contributing anything new to the common knowledge base. We’re here enjoying our experience, which maybe an AI can’t do. But in terms of output, people don’t really offer anything unique.

I think it shows a complete lack of common sense, which means it will not be reliable in many situations.

It is still an intelligence, but one that has no inherent rationality. It is like our ability to recognize a personā€™s face. That is a learning capability. But that ability is not sufficient for our rationality, or even necessary. There are ā€œface blindā€ people who do just fine.

In other words, it is different in degree but not in kind from something like a linear model. That model also exhibits a kind of learning.
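To show what I mean by a linear model exhibiting ā€œa kind of learningā€, here is a minimal sketch in Python using numpy; the data and variable names are made up for illustration. The model extracts a pattern from examples, which is ā€œlearningā€ in a narrow sense, with no grasp of what the numbers are about.

```python
import numpy as np

# Toy data: say, hours studied vs. exam score. The model has no idea
# what these numbers mean; they are just inputs and outputs to fit.
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([52.0, 58.0, 61.0, 70.0, 74.0])

# Ordinary least squares with an intercept term.
A = np.column_stack([np.ones_like(X), X])
(intercept, slope), *_ = np.linalg.lstsq(A, y, rcond=None)

# The "learned" relationship, and a prediction for an unseen input.
print(f"score is roughly {intercept:.1f} + {slope:.1f} * hours")
print("prediction for 6 hours:", intercept + slope * 6.0)
```

ChatGPT is vastly larger and the function it fits is vastly more complicated, but it is the same kind of thing: parameters adjusted to fit data.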

We have an intelligence that is different in kind from both of those.

I think we have to be careful about what we mean by this.

Generally actuaries are not producing new ideas that radically alter the way we look at the world. But almost nobody does that. And even for those people who do, in my studies it is generally much less of a ā€œjumpā€ than it initially appears. Often they slightly reconfigure existing ideas in ways that have radical implications.

However, every person does help create the future in ways that create and reflect some kind of normative values. This is a kind of significance and novelty that is infused in everything we do. It is not recognized in the same way as intellectual or artistic achievement, because we like our heroes.

This, too, is closed to something like ChatGPT, which is not self-conscious and therefore cannot have these kinds of normative values.

ChatGPT certainly lacks rationality. But then children also lack rationality. I think it will take a little more time to see whether these things are hopeless on that front.

Similar to learning how to draw hands. It might just be that rationality is something that is difficult to imitate, and that we ourselves are just better at imitating rationality than AIs are.

This, unfortunately, is unprovable; we don’t even have a definition for it.
But one theory is that consciousness has to be tied to some senses. In the absence of senses, it’s hard to know if consciousness is possible.

This is not really what I’m arguing. That is still consistent with looking at ourselves as disembodied senses, where rationality is something that happens with the information received through those senses. A person could be born without a sense of taste or smell and still be a perfectly functional human being (so far as I know).

But suppose we fastened a person into a Hannibal Lecter-style full body restraint, and did not answer any of that person’s questions. Instead we simply fed the person information. I’m skeptical such a person could really learn and become a full human being that way.

I tend to think children are rational (maybe not infants).

They donā€™t have the same kind of self-critical rationality as adults. Nor do they have as fully developed a sense of self.

But even young children know, for example, which parent is more likely to say yes to ice cream. And they can self-consciously ask that parent for ice cream when they want some.