Artificial Intelligence Discussion

Going with the mammogram example, I think it would be a trivial effort to compare human vs. AI. The radiologist (or whatever) is looking at the scan for a few specific things. Train your AI to look for those things. When you want to compare them, get your set of 10 or 50 radiologists and the AI to diagnose 100 or so scans with known diagnoses/case histories and see how often they make the correct diagnoses. You can look at percent agreement or confusion matrices. It’s possible to look at repeatability and so on.
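
A minimal sketch of that comparison, assuming each reader's calls and the known diagnoses are available as simple label arrays (the data below is made up; scikit-learn does the bookkeeping):

```python
# Compare each reader (human or AI) against known diagnoses on the same scans.
# Hypothetical labels: 1 = positive, 0 = negative.
from sklearn.metrics import accuracy_score, cohen_kappa_score, confusion_matrix

known       = [1, 0, 0, 1, 0, 1, 0, 0]  # ground-truth diagnoses (truncated for the sketch)
radiologist = [1, 0, 1, 1, 0, 1, 0, 0]  # one reader's calls
ai_model    = [1, 0, 0, 1, 0, 0, 0, 0]  # the model's calls

for name, calls in [("radiologist", radiologist), ("AI", ai_model)]:
    print(name)
    print("  percent agreement:", accuracy_score(known, calls))
    print("  confusion matrix:\n", confusion_matrix(known, calls))
    # Cohen's kappa corrects for chance agreement -- relevant when most scans are negative.
    print("  kappa vs. truth:", cohen_kappa_score(known, calls))
```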

It’s in the training data set used to build the AI. The AI is learning how to identify the features that lead to a particular diagnosis -- what distinguishes a chest x-ray that is clear of lung cancer from one that is positive for it.

For particular uses, I think it would be feasible to train an AI to say it’s unsure, or at least to give an estimate of the confidence it has in a result, e.g. a negative result with 60% certainty vs. a negative result with 99% certainty. Alternatively, train it to identify positive, negative, or unsure.
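
One way to get “positive / negative / unsure” out of a model that emits a probability is to reserve a middle band for humans. A minimal sketch; the 0.10/0.90 cutoffs are arbitrary assumptions, not anything from practice:

```python
def triage(p_positive: float, lo: float = 0.10, hi: float = 0.90) -> str:
    """Map a model's P(positive) to a three-way call.

    Anything between the cutoffs is routed to a human rather than
    forced into a yes/no answer. In practice the cutoffs would be
    tuned on a validation set against the relative costs of missed
    positives vs. unnecessary reviews.
    """
    if p_positive >= hi:
        return "positive"
    if p_positive <= lo:
        return "negative"
    return "unsure"

print(triage(0.01))  # -> negative (a 99%-certain negative)
print(triage(0.40))  # -> unsure: a "negative with 60% certainty" gets escalated
```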

And one key factor is that AIs don’t generate a single answer unless you tell them to. They are estimating probabilities of different possibilities, and you could ask for every possibility above a certain threshold.
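
A sketch of that “everything above a threshold” idea, assuming the model exposes a probability per diagnosis (the labels and numbers are invented):

```python
# Hypothetical per-diagnosis probabilities from a chest x-ray model.
probs = {"clear": 0.55, "nodule": 0.30, "effusion": 0.12, "pneumothorax": 0.03}

THRESHOLD = 0.10  # report anything the model considers at least 10% likely

candidates = sorted(
    ((label, p) for label, p in probs.items() if p >= THRESHOLD),
    key=lambda item: item[1],
    reverse=True,
)
for label, p in candidates:
    print(f"{label}: {p:.0%}")
# clear: 55%, nodule: 30%, effusion: 12% -- pneumothorax falls below the cutoff.
```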

Or perhaps initially, you could use AI as an automated 2nd opinion generator for radiologists that won’t say anything unless it disagrees. Suggesting when a closer look is needed could significantly reduce the number of errors made.
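
That silent second opinion could be little more than a comparison that only speaks up on disagreement. A sketch under those assumptions, with hypothetical labels and an invented confidence cutoff:

```python
def second_opinion(human_call: str, ai_call: str, ai_confidence: float,
                   min_confidence: float = 0.80) -> str | None:
    """Return a flag only when a confident AI disagrees with the human.

    Stays silent (returns None) on agreement or low AI confidence, so
    the radiologist's workflow is untouched in the common case.
    """
    if ai_call != human_call and ai_confidence >= min_confidence:
        return f"AI read '{ai_call}' ({ai_confidence:.0%}) -- suggest a closer look."
    return None

print(second_opinion("negative", "negative", 0.95))  # None: silent on agreement
print(second_opinion("negative", "positive", 0.91))  # flagged for review
```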

I think the cost of GPUs and the power they need to run matters for the GPTs being used as chatbots, or to make a video of a polar bear dressed like Santa pushing a shopping cart full of fish (i.e., little to no real use -- basically a novelty that will fade from public interest). At this point, it seems to be a lot of money being burned for unclear goals.

A lot of the smaller, more targeted AIs seem to provide high returns on investment.

It leaves me wondering what happens to NVIDIA if we just need a “simple” Intel or AMD CPU to get great use out of AI.

I wonder about using the AI to identify areas of interest on scans. I can see that being both helpful and harmful (e.g. helps a doctor quickly spot the problem area, but perhaps leads them to ignore other issues outside the area of interest).

It’s funny: a decent GPU (not state of the art, but “decent”) is probably the closest thing I’ve had to a “luxury” for most of my adult life. I have never owned a nice car, or a nice house, or had good food, or new clothes, or an iPhone, or anything of the sort, but I have always had at least a (modest) gaming computer.

…Anyway, I am worried that we will overspend on AI resources. But the problem isn’t lack of ROI. It’s that there is very high ROI for even the most wasteful applications.

And the solution, if there is one, is the same as the solution to everything else -- a consumption tax of some sort.

Anyway… to your topic… I read that Waymo was considering rebuilding their whole self-driving-car technology (which is fully functional already) around the Gemini chatbot. They are hesitant in part because the cars don’t have the compute to handle that implementation right now.

I am completely against AI in medicine. But there are people with investments in Nvidia and the like, so they argue for it. I don’t have such investments, so I am not going to argue incessantly with AI-cultist types.

Why is it that a trained radiologist can do his job pretty well while the computer needs to train on billions of data points? Essentially, what people are pushing is the idea that if an untrained person spent 10 years just studying scans, visually, he could actually become a moderately good radiologist. Pretty sad if true. The reason the radiologist doesn’t actually need to see that many examples to “learn” is that he has actual medical training.

But yeah, there is a very strong push from the tech community to destroy this concept of domain expertise and simply view every task as pattern recognition. These people are dangerous, in my view.

What is your background in the medical field that qualifies you to conclude that AI is bad?

Seems that it’s a bit of a knee-jerk response here.

This is precisely the problem. I’m not the one that needs to prove that it’s bad; you’re the one that needs to prove that it works.

But that’s the entire point.

The technology is currently being developed and tested. It will be a while before this is rolled out at scale.

Based on the people I have talked to about it, it can provide a positive value-add when it comes to improving the process of diagnosing people.

But sure, the devil will be in the details. As with most advanced technologies, there will be bumps in the road.

There are people like Hinton and that Coursera guy at Stanford who claimed in 2017 that “we need to stop training radiologists,” all based on their own research supposedly demonstrating the superiority of AI over radiologists. The arrogance of these people is unbounded.

There are always pie-in-the-sky statements whenever a new technology comes up.

I wouldn’t pay much attention to them.

AI will mostly allow skilled people to be a bit more productive.

We currently have a shortage of skilled people in medical fields, so something that could potentially boost their productivity (not just in the number of people they can diagnose but also in how accurately they diagnose) is largely viewed as beneficial.

That’s my impression, in any event.

Still, I have reservations about AI in radiology, particularly when it comes to education. One of the main promises of AI is that it will handle the “easy” scans, freeing radiologists to concentrate on the “harder” stuff. I bristle at this forecast, since the “easy” cases are only so after we read thousands of them during our training—and for me they’re still not so easy! The only reason my mentors are able to interpret more advanced imaging is that they have an immense grounding in these fundamentals. Surely, something will be lost if we off-load this portion of training to AI, as it would if pilots turned over their “easy” flights to computers. As the neuroradiologist and blogger Ben White has pointed out, inexperienced radiologists are more likely to agree with an incorrect AI interpretation of a scan than radiologists with more experience, suggesting that in the future we will need even stronger humans in radiology, not rubber-stampers.

However, I don’t see that it’s necessarily wrong for an AI to provide indications to a human tasked with making determinations in security or medicine, as long as the human is empowered to exercise their expertise when the AI is deluded. (Especially if the AI is subsequently retrained with that additional information.)

We see this today with the use of machine-learning-based algorithms in the pricing and underwriting of niche commercial lines P&C accounts. While the indications generated by the algorithms may not be life-and-death, they can have impacts in the millions of dollars.
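
For flavor, a toy version of that kind of indication -- the features, data, and model choice here are entirely invented, just to show the shape of the workflow:

```python
# Toy pricing indication for a niche commercial account.
# Features, data, and the synthetic loss-cost formula are all made up.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 3))  # pretend: exposure, hazard class, territory factor
y = 1000 * X[:, 0] + 500 * X[:, 1] ** 2 + rng.normal(0, 50, 500)  # synthetic loss cost

model = GradientBoostingRegressor().fit(X, y)

account = np.array([[0.8, 0.4, 0.1]])  # one new account's features
print(f"indicated loss cost: {model.predict(account)[0]:,.0f}")
# The indication informs the underwriter; it doesn't replace judgment on the account.
```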

It’s useful for a small number of tasks. There is a push by firms like Nvidia and the cloud providers to apply the methods to almost anything. Hence the AI hype and bubble.

The loss is probably calculable by a bunch of lawyers, actuaries, or doctors. Maybe just look at malpractice lawsuits, for example.

At scale, the world is full of catastrophes like that, and we need to evaluate them.

It might be harder to evaluate a catastrophe, but it is that much more important.

…The nice thing about medicine is that it relies on science (i.e., randomized controlled trials with measurable outcomes). Maybe not enough of a role to overcome AI-hate, but more than in other industries.

Agree that it’s about framing. AI aside, we already disagree strongly about whether RNs and PAs are qualified to “free up time for doctors”, with the ANA, AAPA, and AMA all writing angry letters and signing large checks to Congress.

Having bots come into the picture is similar. Except that people like MDs, RNs, and PAs, but hate bots.

Other professions will be the same, I think. Are actuaries going to use AI? Yes -- obviously we already do, to the extent that it is reliable. Could it replace actuaries? Sure, if it became super reliable. Would the SOA/CAS fight it? Uhhhh…