I’m not seeing where answering “PhD level” science questions means getting to the solution the same way humans do. The critique I’ve seen from my science colleagues on the use of neural networks to solve scientific problems is that they’re black boxes. You give them the numbers and they produce really good answers, but we’re not sure how they got to them.
I’ve seen similar complaints about the use of individual-based or agent-based models to investigate systems. The argument from mathematicians is that you should really be using partial differential equations and that the agent-based approach is just brute force. However, when some people investigated this, if I recall correctly, they found that the partial differential equations miss the importance of individual histories, which strongly affects the predictions.
My view is, if AI is somehow working to give you the answers you need, use those numbers for now. Test it a bunch to see where/when it breaks. If it’s an important enough problem, keep on working to try and get a human solution. Otherwise move on. Don’t let perfect be the enemy of good enough.
I think that scientists, and actuaries, have the concept of an inferential model. I think they tend to treat “Artificial Intelligence” as a kind of inferential model, which I think is accurate. That leads to the kinds of questions you are asking, which I think are the right ones. There is room for healthy disagreement about how much the “black box” nature of neural nets impacts the estimate of uncertainty, but that can be worked out in practice.
Most business people and developers, in my experience, do not have a clear concept of an inferential model. Instead, they have the concepts of human co-workers, and of traditional software using libraries of algorithms. These are the two concepts they use to understand “Artificial Intelligence”. Without the concept of “inferential model”, how else can we understand large language models appearing to understand and answer complicated (“PhD level”) science questions? Only by supposing that the software can think the way an expert co-worker can. And that it will be able to autonomously solve problems that a PhD scientist can also solve. Never mind that you get a PhD for original research, not for doing well on a test.
I mean we know how they got them; it’s just data fitting. The problem is that the resulting function has no interpretation. It’s just giving you an estimate of f(x) where x is high dimensional. If you think that’s science then so be it, but I don’t think it is. You can attack every problem under the sun this way; it doesn’t mean you’re actually solving anything. You’re merely approximating a solution, and frankly it is highly arrogant and foolish to think that the current crop of algorithms can realistically achieve a very good approximation.

What we’re seeing now is that you have data x_i, i = 1, …, n, where n is extremely large (say billions or trillions), and you train your algorithm to approximate f(x), getting some f_n(x). Then people evaluate f_n(x) at an x close to one of the x_i and claim “wow, it works.” But that is just a consequence of having enough continuity. If you think that’s useful, then fine. But the purpose of the algorithm is to find an f_n(x) that is very close to f(x) on all inputs x. That is an extremely hard problem. Solving it is what AI is about; what we have now is probably closer to a lookup function with some continuity.
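To make that concrete, here’s a minimal toy sketch (the 1-D function, the kernel smoother, and all the names are my own illustrative assumptions, not anyone’s actual training pipeline): a kernel smoother built from the samples x_i stands in for f_n, and it behaves exactly like a lookup with continuity, good near the training data and useless far from it.

```python
# Toy sketch: a kernel smoother built from samples x_i stands in for the
# trained model f_n. It acts like a "lookup with continuity": accurate
# near the data, useless far away from it.
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # the "true" function we pretend we are trying to learn
    return np.sin(x)

# training data x_i, i = 1, ..., n
n = 1000
x_train = rng.uniform(0.0, 10.0, size=n)
y_train = f(x_train)

def f_n(x, bandwidth=0.1):
    # Nadaraya-Watson estimate: a smoothness-weighted lookup over the data
    w = np.exp(-0.5 * ((x - x_train) / bandwidth) ** 2)
    return np.sum(w * y_train) / np.sum(w)

# query near the training data: "wow, it works"
x_near = 5.01
print(f_n(x_near), f(x_near))      # nearly identical

# query far outside the data: no nearby x_i to borrow from
x_far = 50.0
with np.errstate(invalid="ignore"):
    print(f_n(x_far), f(x_far))    # nan vs. the true value
```

The point of the sketch is only that interpolating near the training data and approximating f everywhere are very different problems.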
I think an example we can give (and maybe I did earlier in the thread?) is the Ptolemaic model of the solar system.
Recall that Ptolemy modeled the solar system with Earth at the center, and with the Sun, moon, and planets having circular orbits. But within those circular orbits are smaller circular orbits (epicycles).
It is what we would call an ad hoc model, and it had predictive power. However, after the development of Fourier analysis, we now understand that any periodic orbit can be described with enough epicycles, so that predictive power is not surprising.
Additionally, the Ptolemaic model seemed to support many scientific conjectures we now know to be false. Namely, that the Earth is in a special place in the Solar System, and that the heavenly bodies follow different physics than Earthly bodies.
Newton’s universal laws are far more fruitful, even though we could, I assume, still fit and predict the paths of solar-system bodies with enough epicycles using modern computers. Simple predictive power, alone, is not enough to explain what modern science is.
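If you want to see the Fourier point concretely, here is a small sketch (the off-center ellipse and the use of numpy’s FFT are illustrative assumptions, not a historical reconstruction): each Fourier coefficient of a periodic orbit is one epicycle, and a handful of them already reproduce the path.

```python
# Sketch of the Fourier-analysis point: a periodic planar orbit z(t) can be
# reproduced to any accuracy by summing uniform circular motions (epicycles),
# i.e. a truncated complex Fourier series.
import numpy as np

N = 512
t = np.linspace(0.0, 2 * np.pi, N, endpoint=False)

# an "orbit" that is clearly not a single circle: an off-center ellipse
z = 3.0 * np.cos(t) + 1.0j * 2.0 * np.sin(t) + (0.5 + 0.3j)

coeffs = np.fft.fft(z) / N             # each coefficient is one epicycle
freqs = np.fft.fftfreq(N, d=1.0 / N)   # integer frequencies

def reconstruct(num_epicycles):
    # keep only the largest circles and sum them up
    order = np.argsort(np.abs(coeffs))[::-1][:num_epicycles]
    zk = np.zeros_like(z)
    for k in order:
        zk += coeffs[k] * np.exp(1j * freqs[k] * t)
    return zk

for m in (1, 2, 3, 5):
    err = np.max(np.abs(z - reconstruct(m)))
    print(f"{m} epicycles -> max error {err:.3e}")
# by three terms the error is at floating-point noise; a messier periodic
# path would simply need more terms
```

Which is exactly why a good fit, by itself, told Ptolemy nothing about whether the Earth was really at the center.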
Remember, too, that these latest advances in AI have come from improving algorithms to scale to large data sets and across network computing. In other words, they have been developed by mathematicians, as best I can tell. I do not think that scientists, especially experimental scientists, have really had an opportunity to weigh in yet.
It’s kind of arrogant to assume you’re knowledgeable enough to comment on this for all of science. Depending on what it’s being used for, AI may be producing something that’s good enough for science or totally inadequate. You’re making blanket assertions without a lot of supporting detail. In my particular niche, it’s producing very good results. On the other hand, I wouldn’t be comfortable using it for a stock assessment model quite yet. I’ll also note that there are a lot of inferential models that were used for a time and then dumped for various issues. It’s helpful to remember that all models are wrong, but some are useful.
Yup. A computer simply doesn’t have the capacity for genuine empathy. Human connection is what makes education work, especially in a setting like mine that has so many student needs to consider beyond academic content.
The practice of science is to gradually and carefully add assumptions and constraints to the models. By having more structure, you get simpler models, faster convergence when fitting data, and so on. That’s science. AI is a data-fitting tool, and using only AI, without imposing any constraints based on scientific insight, is not really science.
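A toy contrast of that point (the exponential-decay form, the noise level, and the use of scipy’s curve_fit are my own assumptions, not anyone’s real analysis): a two-parameter model with the right structure fits sparse, noisy data and extrapolates sensibly, while an unconstrained high-degree polynomial through the same points does not.

```python
# Imposing structure from scientific insight -- here, the assumed form
# y = a * exp(-b * t) -- lets a two-parameter fit succeed on sparse noisy
# data, while an unconstrained degree-7 polynomial extrapolates badly.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

def model(t, a, b):
    # structure imposed by the scientist: exponential decay
    return a * np.exp(-b * t)

t_obs = np.linspace(0.0, 2.0, 8)                 # only 8 noisy observations
y_obs = model(t_obs, 5.0, 1.3) + rng.normal(0.0, 0.1, t_obs.size)

# constrained fit: two interpretable parameters (initial value, decay rate)
(a_hat, b_hat), _ = curve_fit(model, t_obs, y_obs, p0=(1.0, 1.0))

# unconstrained fit: degree-7 polynomial through the same points
poly = np.polyfit(t_obs, y_obs, deg=7)

t_new = 4.0                                      # extrapolate beyond the data
print("truth      :", model(t_new, 5.0, 1.3))
print("structured :", model(t_new, a_hat, b_hat))
print("polynomial :", np.polyval(poly, t_new))   # typically far off the truth
```

The constrained model also tells you something (a rate, an initial value); the polynomial coefficients tell you nothing.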
Well, it can be on a speaker with no screen if that’s what you want. Sooner or later, it will come from an anthropomorphic robot. I hear that one from Chuck E Cheese is looking for work.
My guess is that the “escape hatch” is the phrase “artificial general intelligence as it has traditionally been understood”.
What does that mean, exactly? Different things to different people.
However, there is also a long history of more “rationalist” oriented people (as opposed to “empirically” oriented people) making rash claims about how their new method will revolutionize everything.
Notably, Herbert Simon in the 1960s made grand predictions about AI accomplishing tasks by the 1970s that it has only recently become able to do.
And Descartes himself did the same. He imagined he might live several hundred years once he applied his scientific method to medicine.
Both of them were much greater thinkers than Altman, to put it mildly. (Not that there is any shame in being a lesser mind than they were.)
Aside from the fact that I simply don’t trust Sam, I don’t think he’s a genius, so it’s annoying that he tends to be the spokesperson for AI.
I do find it very interesting how much the real big-brain researchers differ on where AI is going and how it will get there. I don’t think they are lying, either. They just really don’t know.
“Not knowing” is bad PR and does not bring the money in for the CEOs and shareholders.
Confidently lying about shit is former+future-president-approved.