Artificial Intelligence Discussion

Hopefully a convolutional neural network rather than a convoluted one.

1 Like

This is the hard part. You basically have to hit a vast number of patients with a standardized suite of tests regardless of their condition and then correctly diagnose them to build the data set for fitting. There's information in both the positive and the negative results. You also need to see the distribution of values across people, possibly in relation to their own physical details, for tests that produce results on a continuum rather than a positive or negative. This is going to be extremely expensive. You'll likely need hundreds of thousands of patients to do this well.

What it comes down to is that it's a lot easier to get an AI to correctly answer "does this mammogram indicate type-whatever breast cancer?" vs. "does this mammogram indicate breast cancer?" vs. "here are the patient's symptoms; what's wrong with them?"

1 Like

Probably

AI will create the hardest bugs imaginable. The problem with AI is that it always gives you something, even when it doesn't know what to do. This is extremely dangerous.

I am firmly against AI doing anything related to medical diagnosis. If you think the work of a radiologist is purely pattern recognition then something is seriously wrong with that profession. AI has no medical training.

It is very much pattern recognition.

In many cases you can use AI to backstop the radiologists (who do miss cases)

Humans make mistakes too, for operational reasons (tired, distracted, etc.).

I view using AI in this context as a good thing.

You can use AI to look at 1000 x-rays and have the edge cases seen by a human.

This would make a skilled person much more productive (and accurate) vs going through 1000 x-rays themselves.

This is related to what @Fish_Actuary was referring to. AI can make skilled people more productive.
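The triage workflow described above can be sketched in a few lines: a model scores each scan, and only the edge cases go to the human. The `model_score` function, the threshold values, and the data structure here are invented placeholders for illustration, not a real system.

```python
# Sketch of AI-assisted triage: the model scores each scan, confident
# results are handled automatically, and uncertain ones go to a human.
# `model_score` is a hypothetical stand-in for a real classifier.

def model_score(scan):
    # Placeholder: a real model would return P(abnormal) for the image.
    return scan["ai_probability"]

def triage(scans, low=0.05, high=0.95):
    """Split scans into auto-cleared, auto-flagged, and human-review piles."""
    cleared, flagged, review = [], [], []
    for scan in scans:
        p = model_score(scan)
        if p < low:
            cleared.append(scan)   # confidently normal
        elif p > high:
            flagged.append(scan)   # confidently abnormal -> prioritized
        else:
            review.append(scan)    # edge case: a human reads it
    return cleared, flagged, review

scans = [{"id": i, "ai_probability": p}
         for i, p in enumerate([0.01, 0.5, 0.99, 0.02, 0.7])]
cleared, flagged, review = triage(scans)
print(len(cleared), len(flagged), len(review))  # 2 1 2
```

With thresholds like these, the radiologist only reads the middle pile, which is how one person could effectively cover 1000 x-rays.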

3 Likes

AI is only good for tasks where small errors don't lead to large losses. Take the example of the robot vacuum. A small error can lead to the vacuum getting stuck somewhere, which is essentially a catastrophic breakdown in the context of the task.

AI is good for things like detecting spam or doing basic level chatbot stuff for customer service. If you are using AI to do coding work that is critical to airports or security, you are an insane person.

I'm firmly against using AI in medical diagnosis.

I've fiddled around with ChatGPT trying to do basic probability, and it's incredible how good it is at making you think it's right. Even an average student of math couldn't detect the error.

The Canadian healthcare system disagrees with you from what I have seen.

They are very much pro-using AI in this way.

How else are you planning on improving productivity and care delivery?

Especially in rural areas.

Productivity is meaningless in the context of proper diagnosis. You need to get it right. The biggest problem with the medical profession is that they make too many errors. If you want to improve things, you should look at improving training or even changing who gets into medical school. Being able to memorize stuff doesn't make you a good doctor. That would be a basic start.

1 Like

If they make too many errors, then an artificial intelligence that makes fewer errors is an improvement (with the severity of errors also an important consideration, though trained humans make some fatal errors too).

Surely a big part of radiology is pattern recognition, and the more experienced the radiologist, the more likely they will know what they're looking at. AI will have a vast database of images and will bring back the images with the highest percentage match, including where the image may be rotated within the plane or toward/away from the camera. The radiologist can use their knowledge to determine which one(s) look to be the same condition, or discard them all entirely and go with something else.

If it turns out that neither the AI nor the radiologist came up with the correct diagnosis when new information comes to light, that new information can be added both to the database and to the experience of the radiologist. It's a tool, and if it makes the radiologist better at their job, they'll use it.
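The "bring back the closest-matching images" step is essentially nearest-neighbour retrieval over image features. A toy sketch, with made-up three-dimensional feature vectors standing in for real learned embeddings:

```python
# Toy similarity search: represent each reference image as a feature
# vector and return the top-k matches by cosine similarity. Real systems
# use high-dimensional learned embeddings; these vectors are invented.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_matches(query, database, k=2):
    """Return the k database entries most similar to the query vector."""
    scored = [(cosine(query, vec), label) for label, vec in database.items()]
    scored.sort(reverse=True)
    return scored[:k]

database = {
    "condition_A": [0.9, 0.1, 0.0],
    "condition_B": [0.1, 0.9, 0.2],
    "condition_C": [0.8, 0.2, 0.1],
}
print(top_matches([1.0, 0.1, 0.0], database))
```

The radiologist would then see the retrieved cases ranked by match percentage (here, condition_A first) and make the final call themselves.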

There is no way to evaluate AI vs. human in the context of medical diagnosis. What's the loss associated with a patient who has swelling in their legs and the AI says it's because of an allergy when the patient has kidney problems? It's an effing catastrophe.

AI algorithms are just pattern-recognition gadgets. They are trying to find a functional representation (input → output) using many features. Do you think this is how a medical doctor should make a diagnosis? Where is the scientific training? Where is the medical knowledge? It's just a black box. There are scientific ideas that should be guiding a diagnosis. The images that you see represent some disease, but the classification should be based on some logical or scientific principles.
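For what it's worth, the "functional representation (input → output)" being described can be shown in miniature: a classifier is just a fitted mapping from feature vectors to labels, with no medical reasoning anywhere in it. Toy data and a 1-nearest-neighbour rule, purely for illustration:

```python
# A classifier in miniature: map a feature vector to the label of the
# closest training example (1-nearest-neighbour). Note there is no
# medical knowledge here, only geometry over made-up feature values.

def predict(x, training_data):
    """Return the label of the closest training example."""
    def dist(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b))
    _, label = min((dist(x, xi), yi) for xi, yi in training_data)
    return label

training_data = [
    ([0.1, 0.2], "benign"),
    ([0.9, 0.8], "malignant"),
]
print(predict([0.85, 0.75], training_data))  # malignant
```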

All these political truths lately fit the not-funny part of the thread title; a bit unclear on the first half…

For sure, there are a million different specialized AIs that do all sorts of different things. And among them are specialized chatbots, which try to do one specific thing cheaper, or are just fine-tuned to your data. GPT gets all the attention, but if you look broadly there's just so much happening at the same time.

The general chatbots also vary a lot in cost, with the more expensive ones being "bigger" or "thinking harder". Even if we're just talking "GPT", there are now at least four cost tiers of models.

On the other hand, if we're talking about aiding or replacing workers who bill $200/hour, it doesn't make a lot of sense to worry about saving money on GPUs.

So, you want to see the AIā€™s diploma?

Or their supervisor?

www.karen.ai

2 Likes

I don't think the work of a radiologist is totally pattern recognition, but I suspect that a lot of it is.

I kind of wonder if AI for general medical diagnosis would be better targeted at providing prompts with probability scores, and perhaps some sort of diagnostic-testing priority list, to try to get to a correct diagnosis quickly. The difficulty here is developing the training data set. On the other hand, I think of all the stories I hear about women struggling with endometriosis, and it seems so obvious when you hear their stories. Why does it take so long for their doctors to get there? Are doctors getting too fixated on other possibilities? Or just not thinking of endometriosis in the first place? I think the saying is: when you hear hoofbeats, look for horses, not zebras.
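A crude sketch of the "prompts with probability scores" idea: score each candidate diagnosis by how much of the patient's presentation it explains, and return a ranked differential for the doctor to consider. The condition/symptom table and the scoring rule below are invented for illustration, not real clinical data:

```python
# Toy ranked differential: score each condition by the fraction of the
# patient's symptoms it commonly explains. All data here is made up.

CONDITIONS = {
    "endometriosis": {"pelvic_pain", "heavy_periods", "fatigue"},
    "IBS": {"abdominal_pain", "bloating", "fatigue"},
    "appendicitis": {"abdominal_pain", "fever"},
}

def rank_differential(symptoms):
    """Rank conditions by the share of reported symptoms they cover."""
    scores = []
    for condition, typical in CONDITIONS.items():
        coverage = len(symptoms & typical) / len(symptoms)
        scores.append((coverage, condition))
    scores.sort(reverse=True)
    return scores

patient = {"pelvic_pain", "heavy_periods", "fatigue"}
for score, condition in rank_differential(patient):
    print(f"{condition}: {score:.2f}")
```

Here endometriosis scores 1.00, so it would at least appear at the top of the prompt list instead of never being considered, which is the failure mode described above.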

For x-rays, MRIs, etc., the AI is generally only as good as the data set used to train it. On the other hand, I'd think you could train it on more images than a radiologist could look at in a thousand, or perhaps tens of thousands, of lifetimes.