Machine learning doesn't “solve problems”. These things are just tools, often very flawed, sometimes quite useless.
I’ve found what I think is a use for the mediocrity that is AI: summarizing my calls.
My client calls are long, like an hour, and cover a lot of ground. I have a transcript, but I also make a summary in our database.
I could have AI create a summary of the transcript instead of me doing it manually. That would save some time. I need the summary, since I review it before the next call.
Solves our problem of determining the species ID of fish we find in predator stomachs. Happy to hear why you think it doesn’t.
Or license plates from Louisiana??
Had a doctor’s appointment last week and there was a notice that the doctor was using AI to transcribe patient visits to save time on paper work.
My impression continues to be that AI will mostly help skilled workers be more effective rather than eliminating massive numbers of jobs.
If the doctor can see more patients and reduce waiting times while producing the same outcomes, that is a good use of AI. Especially in Canada where we are a bit short on doctors.
Calgary…
That's pattern recognition.
You trained a model with the necessary data on various fish, and it then sifts through the target looking for matches above a confidence threshold.
You aren’t solving a complex equation, but rather looking for fitted matches based on your training data. You don’t need to be correct 100% of the time.
This is fine for that kind of use. But for things like calculating the orbit of a satellite (how much fuel you need to burn at various points in its journey, what forces are acting on it over time, and so on), you need to be 100% correct, or else you will lose control of the craft when the AI makes a mistake.
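The classification setup described above can be sketched in a few lines. This is a toy illustration, not the actual pipeline: the species names and measurements are made up, and a simple nearest-centroid model stands in for whatever classifier is really used. The key point it shows is the confidence threshold, below which the model declines to guess.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "training data": 2-D feature vectors for two hypothetical species.
# In a real pipeline these would be measurements or image embeddings of
# known fish; "perch" and "minnow" here are purely illustrative.
perch = rng.normal(loc=[10.0, 3.0], scale=0.5, size=(50, 2))
minnow = rng.normal(loc=[4.0, 1.5], scale=0.5, size=(50, 2))
X = np.vstack([perch, minnow])
y = np.array(["perch"] * 50 + ["minnow"] * 50)

# Nearest-centroid "model": one prototype vector per class.
centroids = {label: X[y == label].mean(axis=0) for label in ("perch", "minnow")}

def classify(sample, threshold=0.8):
    """Return (label, confidence), or ('unknown', confidence) below threshold."""
    # Softmax over negative distances gives a crude confidence score.
    dists = np.array([np.linalg.norm(sample - c) for c in centroids.values()])
    scores = np.exp(-dists) / np.exp(-dists).sum()
    best = int(scores.argmax())
    label = list(centroids)[best]
    conf = float(scores[best])
    return (label, conf) if conf >= threshold else ("unknown", conf)

print(classify(np.array([9.8, 3.1])))  # confidently classified as "perch"
```

The threshold is what makes the "you don't need to be correct 100% of the time" trade-off explicit: ambiguous stomach contents can be flagged for a human instead of forced into a class.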
Well I think it seems fishy
I agree with you that some specifics of the two problems are different, and you listed good strategies.
But the problems are also the same. In the language of physics, we have to identify the differential equations that tell us the conserved quantities. I believe there is a similar exercise in the ecology of populations. Once we find the differential equations, we solve them. By solving them, we find a different representation of the problem that has fewer degrees of freedom. For the Earth orbiting the Sun, there are, what, maybe 4 numbers that completely determine the Earth’s position for all time (I could be forgetting a couple). Then we use statistics to fit those 4 numbers from data.
It’s two steps: find a reduced representation. Then do a relatively simple statistical inference within that reduced representation.
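A toy analogue of this two-step recipe, with a sinusoid standing in for the orbital dynamics (all numbers here are made up for illustration): the "physics" fixes the frequency, so the reduced representation is just two numbers, amplitude and phase, and step two is an ordinary least-squares fit of those two numbers from noisy data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Step 1 (done "by the physics"): we know the signal is a sinusoid of
# fixed frequency omega, so the reduced representation is (A, phi).
omega = 2.0
t = np.linspace(0.0, 10.0, 200)
A_true, phi_true = 3.0, 0.7
y = A_true * np.sin(omega * t + phi_true) + rng.normal(scale=0.2, size=t.size)

# Step 2: simple statistical inference inside that representation.
# A*sin(wt + phi) = a*sin(wt) + b*cos(wt), a linear least-squares problem.
design = np.column_stack([np.sin(omega * t), np.cos(omega * t)])
(a, b), *_ = np.linalg.lstsq(design, y, rcond=None)
A_hat = np.hypot(a, b)        # recovered amplitude
phi_hat = np.arctan2(b, a)    # recovered phase
print(round(float(A_hat), 2), round(float(phi_hat), 2))
```

Two fitted numbers reproduce the whole noisy time series, which is exactly the "fewer degrees of freedom" payoff described above.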
These machine learning methods try to combine these two steps into one. As I understand it, a lot of what “deep neural nets” do is try to find coordinate transformations to a lower dimensional representation that do a better job of prediction or inferring what we care about. They do this at the same time that they estimate what the numbers of that representation might be.
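A linear sketch of that idea: PCA via the SVD finds a coordinate transformation under which synthetic 3-D data that secretly lives along a line collapses to essentially one coordinate. Deep nets do something loosely analogous, but nonlinearly and jointly with the prediction task; this example only illustrates the "lower-dimensional representation" part.

```python
import numpy as np

rng = np.random.default_rng(2)

# Data that looks 3-dimensional but is really a 1-D latent variable
# projected along a fixed direction, plus a little noise.
latent = rng.normal(size=(200, 1))
direction = np.array([[1.0, 2.0, -1.0]])
X = latent @ direction + rng.normal(scale=0.05, size=(200, 3))

# PCA via the SVD: the top right-singular vector is the learned
# coordinate transformation to the reduced representation.
X_centered = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(X_centered, full_matrices=False)
explained = s**2 / (s**2).sum()
print(round(float(explained[0]), 3))  # nearly all variance in one coordinate
```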
However, they do not do the same kinds of thinking that a scientist does.
This is easier to see with a linear model. There, we are assuming that the predictors have a constant, linear relationship with the response.
I think what is going on here is the neural net is being trained on the inputs and outputs of these weather models which already have the relevant physics “built in”. In other words, the job of figuring out the right differential equations is already done.
I would call this more “function approximation” than “learning”. It is a major application of machine learning. You approximate a very expensive function call with a much cheaper one. The reason I prefer “function approximation” instead of “learning” is because the model doesn’t have to figure out the laws of physics in the same way. Knowledge of them is “baked” into the inputs and outputs of the (other) weather model it is being trained on.
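A minimal sketch of function approximation in this sense, with a deliberately slow numerical integral standing in for the expensive weather model (the integrand and all constants are invented for illustration): sample the expensive model's inputs and outputs, fit a cheap polynomial surrogate, then evaluate the polynomial instead.

```python
import numpy as np

def expensive_model(x):
    # Stand-in for an expensive simulation: brute-force numerical
    # integration of exp(-t^2) * cos(x*t) over [0, 5].
    t = np.linspace(0.0, 5.0, 100_000)
    dt = t[1] - t[0]
    return float(np.sum(np.exp(-t**2) * np.cos(x * t)) * dt)

# "Training data": input/output pairs from the expensive model.
xs = np.linspace(0.1, 3.0, 40)
ys = np.array([expensive_model(x) for x in xs])

# Cheap surrogate: a degree-9 polynomial fit to those pairs.
coeffs = np.polyfit(xs, ys, deg=9)
surrogate = np.poly1d(coeffs)

x0 = 1.5
print(abs(surrogate(x0) - expensive_model(x0)))  # small approximation error
```

As in the weather case, the physics lives entirely in `expensive_model`; the surrogate never "figures out" the integral, it just memorizes a smooth mapping from inputs to outputs that is far cheaper to evaluate.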
I do think there is enormous potential here. But it will have limitations too.
It’s solving a problem. As I noted in an earlier response to @magillaG, different problems use different AI tools. For my species ID issues, I’m using classification tools. Different tools are needed/used for solving something like the position of the Earth and Moon. AI/ML isn’t some sort of monolith, it’s a broad range of methods/tools.
I think the point is that it solves problems in the mode of a tool. It does not solve problems in the mode of a person, with human intelligence and all that this extra dignity entails.
To make an analogy, it solves problems like a hammer, not like the carpenter swinging the hammer.
If we’re going back to LLMs, I’d say they awkwardly imitate the mode of a person.
They are less reliable than a professional, and way less reliable than a hammer.
However, not every task needs perfection. Sometimes, the risk that it does something completely wrong is worth the time saved when it doesn’t.
But neural nets are designed for high dimensional learning problems. So when people use them in science, they are admitting defeat. The purpose of science is to simplify the problem using domain expertise and to explain things.
Another issue is that there is no real mathematical way of knowing how much data and how many parameters I need to avoid overfitting. Right now all the tech firms are doing is adding more data, more parameters, more computing power. It’s a waste of time imo. They will continue to spend billions, if not trillions, with little improvement. Then they will finally admit “oh, I now understand that infinity is very large”.
The purpose of function approximation is that you can recover the function with limited data relative to the size of the input space. Canonical example: you only need n+1 data points to recover an nth-degree polynomial. Now assume that you have trillions of data points and trillions of parameters. It’s easy to see that you’re actually just building a search engine, not doing actual function approximation.
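The n+1 claim is easy to check numerically; here a cubic (n = 3) is recovered exactly from 4 points:

```python
import numpy as np

# A cubic is pinned down by exactly n + 1 = 4 points: no statistics needed.
true_coeffs = [2.0, -1.0, 0.5, 3.0]      # 2x^3 - x^2 + 0.5x + 3
xs = np.array([-1.0, 0.0, 1.0, 2.0])     # 4 distinct sample points
ys = np.polyval(true_coeffs, xs)

recovered = np.polyfit(xs, ys, deg=3)
print(np.allclose(recovered, true_coeffs))  # True
```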
The only way to realistically evaluate AI algorithms is to see how they work with limited data and parameters. If the tech firms keep adding parameters + data, they are essentially admitting defeat.
I agree if the neural net is all there is. However, neural nets can play important roles in intermediate steps. A good example appears to be using them to model protein folding. The neural net suggests possible solutions that can then be verified through other means.
I tend to agree with this too. However, we have to remember that adding data and parameters to computer vision and large language models produced improvements beyond, I think, almost anybody’s wildest dreams. I don’t think it makes sense to extrapolate that this will continue. But from a tech person’s perspective, people said that about Moore’s law, too, and it is still holding up, at least in some fashion.
Another purpose can be to reduce computational resources. If these AI models do a good job of approximating these very expensive weather model calculations (which can then be verified anyway several hours later), that seems to be a win.
As a teacher, I’m trying to observe, directly from day-to-day experience and indirectly from reading about it, the effect AI will have on education. I think it’s alarmist and premature to just conclude that teachers are obsolete now. Like any other technology, AI is a tool. It’s on my profession to adapt to its existence, and find meaningful ways to incorporate it while also evolving assessment methods so that it’s Johnny and not JohnnyGPT whose mastery we’re evaluating.
I’m not quite sure why there’s an expectation that they would. People are using AI to solve questions/problems they have. They aren’t looking for elegant solutions for the most part.
Is it? This doesn’t particularly match my understanding of science.
Because it is specifically being marketed this way. This is the switch from the phrase “machine learning” to “artificial intelligence”.
Executives from some of these AI companies are also projecting that these large language models will soon surpass human intelligence. They claim the models can already answer “PhD level” science questions, whatever that means. As if intelligence were a matter of access to data and calculations rather than a dialogue with Nature or some other environment.
The insurmountable problem I see with AI as a teacher is that it’s on a screen.
It might have the ability to teach you anything, in any language, with infinite patience, and be able to answer your specific questions about your specific problems in a way that comports with your learning style. Etc.
But it’s still on a damn screen.