well said
It’s important not to reverse engineer human intelligence based on your religious belief in AI algorithms.
I don’t know what you mean by that. Are you suggesting I think AI has human intelligence?
I don’t think AI is human at all.
I don’t really know what intelligence is. I don’t think anyone does.
I do think we will one day invent a machine that is intelligent, but it won’t be human. It will be something else. And when we do, we will have a hard time recognizing it, because we have no definition for inhuman intelligence.
I am guilty of anthropomorphizing AI, but that’s partly from lack of descriptors.
I see most people apply the label “Artificial Intelligence” to mathematical models that are otherwise labeled “black box” models.
More specifically, the “classical” models had the advantage that you could know how much the output changed for a (small) change in the input without redoing all of the model’s calculations. With “black box” models, you can’t do that.
The label “Artificial Intelligence” just sounds better than “black box,” because the latter connotes “I don’t know why it works” while the former conveys, or intends to convey, “I don’t need to know why it works.”
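To make that sensitivity point concrete, here’s a toy contrast in Python. Everything in it is illustrative: a linear regression stands in for the “classical” model and a small MLP for the “black box”; neither is anyone’s actual system.

```python
# Toy contrast: for a linear model, the sensitivity of the output to
# each input is just a coefficient, read off with no extra model
# evaluations. For a black-box model, you have to probe it numerically.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 3))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.5 * X[:, 2]

classical = LinearRegression().fit(X, y)
print(classical.coef_)            # exact sensitivities, for free

black_box = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                         random_state=1).fit(X, y)

# Finite-difference probe: two model evaluations per input dimension.
x0 = np.zeros((1, 3))
eps = 1e-3
for i in range(3):
    dx = x0.copy()
    dx[0, i] += eps
    grad = (black_box.predict(dx) - black_box.predict(x0)) / eps
    print(grad.item())            # approximate, and only local to x0
```

The linear model’s coefficients are its sensitivities everywhere; the black box has to be probed, and the answer only holds near the probed point.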
Interesting article on this topic (free link below) about coding boot camp graduate opportunities going away. One thing they mentioned has been happening more and more for some time: the automation of entry-level tasks. This means that there’s less opportunity to gain experience on the job, as the experienced employees are using AI/automation to do tasks formerly performed by entry-level hires. So, how do you get experience?
So you think there’s some sort of magic associated with medical training?
No, I mean they have knowledge about the human body that enables them to practice efficiently and make better decisions.
I think that in this case GraphCast is really behaving as a function approximation. This is a standard application of machine learning. You approximate a very computationally expensive function with a much cheaper one.
In this case, it is trained on all the inputs and outputs of the physics-based models. In that sense, the “science” is already done.
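For what it’s worth, here’s a minimal sketch of that surrogate idea, assuming scikit-learn is available. The “expensive” function is a made-up stand-in for a physics-based simulator; GraphCast itself is obviously far more elaborate.

```python
# Surrogate modeling in miniature: approximate an "expensive" function
# with a cheap learned model trained on its input/output pairs.
import numpy as np
from sklearn.neural_network import MLPRegressor

def expensive_physics_model(x):
    # Stand-in for a costly simulation: some smooth nonlinear response.
    return np.sin(3 * x) + 0.5 * x**2

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(5000, 1))          # sampled inputs
y = expensive_physics_model(X).ravel()          # "simulator" outputs

# Fit a cheap approximator on the input/output pairs.
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                         random_state=0).fit(X, y)

x_test = np.array([[0.7]])
print(expensive_physics_model(x_test).item())   # ground truth
print(surrogate.predict(x_test).item())         # cheap approximation
```

Once trained, the surrogate answers in microseconds what the original model computes expensively, which is exactly the trade being described.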
As I understand it, the right part of our brain is usually the part that specializes in pattern recognition. This is what a chess grandmaster uses to recognize piece positions, for example. Because the right side of the brain does not speak, we are not able to communicate to ourselves what, exactly, we are doing.
We similarly have other parts of our brain that take the two dimensional images seen by our eyes and try to infer three dimensional information from them. Again, this is done outside of our conscious awareness.
In both cases, lots of data is “mechanically” reduced to relatively simple concepts that we can hold in our conscious minds for more advanced reasoning.
Neural nets can perform a similar function by inferring, for example, a classification of an object from the large, correlated data in a picture.
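As a toy stand-in for that reduction, here’s a small neural net (scikit-learn’s MLPClassifier, nothing fancy) collapsing 64 correlated pixel values into a single digit label:

```python
# A neural net reducing correlated pixel data (8x8 digit images)
# to one simple concept: a class label.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)    # 64 pixel values per image
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000,
                    random_state=0).fit(X_train, y_train)

# 64 numbers in, one simple concept ("this is a 7") out.
print(net.predict(X_test[:5]))
print(net.score(X_test, y_test))
```

Sixty-four numbers go in; one concept comes out, ready for whatever reasoning sits downstream.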
Looking at a scan for a specific type of thing is more like playing Where’s Waldo than solving math problems from axioms.
No, understanding anatomy makes you better at it. Ffs, this is not hard to understand.
I think you have a very romanticised view of medicine.
A lot of it is very repetitive. AI can help with that.
Not surprisingly, what you propose is immensely difficult. Among healthy humans there’s a great deal of anatomical variation that’s perfectly healthy and normal. How do you decide when a variation is sufficient to suggest pathology?

Computational Anatomy and automated diagnosis from medical images has been an active area of research among academics for 30-40 years (see Grenander and Miller, Mumford, and Bookstein, to name but a few of the theorists).

To get an idea of how hard it is simply to compare two 3D images, do a wiki search for “medical image registration,” which is a much simpler component of your proposed system, but far from a solved problem itself. Taking registration to the next level, substructure segmentation entails identifying the relevant anatomy in the image, recognizing that which is abnormal, and doing so reliably given the poor spatial discrimination inherent in any imaging technique as it approaches its limits. Très hard.

It’s astonishing that human radiologists perform as well as they do. Perhaps that’s partly because they approach every image with a wealth of context and medical knowledge, and succeed largely because they know exactly what to look for and how much variation is needed to diagnose. That would be really tough for a computer. I know. That’s what I do for a living. (I program a computer to assess medical images for a pharma.)

In my humble opinion, don’t expect automated image analysis to replace human radiologist physicians any time soon.
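To give a flavor of why even the “simple” registration piece is work, here’s a deliberately toy version in Python: synthetic 2D images, pure translation, brute-force search. Real medical registration is 3D, deformable, noisy, and nothing like this easy.

```python
# Toy image registration: recover the translation that aligns a
# "moving" image to a "fixed" one by brute-force search over shifts,
# scoring each candidate with mean squared error.
import numpy as np

rng = np.random.default_rng(2)
fixed = rng.normal(size=(64, 64))
true_shift = (5, -3)
moving = np.roll(fixed, true_shift, axis=(0, 1))  # fixed image, displaced

best = None
for dy in range(-8, 9):
    for dx in range(-8, 9):
        candidate = np.roll(moving, (-dy, -dx), axis=(0, 1))
        mse = np.mean((candidate - fixed) ** 2)
        if best is None or mse < best[0]:
            best = (mse, dy, dx)

print(best)   # recovers (mse ~ 0, 5, -3)
```

Even this trivial case is a search over a transform space; the clinical version replaces two translation parameters with millions of deformation parameters and images that genuinely differ.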
Lol, checkmate.
Nobody has said they would be replaced.
You are moving the goal posts there.
The point is AI will augment their existing expertise.
Meh.
There are also stories like these…
AI Accurately Identifies Normal and Abnormal Chest X-Rays | RSNA.
Probably too many episodes of House or some other TV series.
I’ve worked a bit with some forensic scientists, and they were constantly having to explain to students that TV != Reality.
Okay, let’s say your wife really wants to know if she has cancer. You see a doc, and the doc agrees with you that AI is all tech bro BS. She gets a scan; it comes back negative. She doesn’t get a second opinion, because that costs $400, which you don’t have.
However, she does send her scan to an AI, because it only costs $1. The AI tells her she is positive. Additionally, there are high-quality studies showing AI is better at this diagnosis than people, producing significantly more true positives and fewer false positives.
Do you just ignore the new result?
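For what it’s worth, the Bayes arithmetic with made-up numbers (the scenario above doesn’t give real sensitivities or specificities) shows why you probably shouldn’t:

```python
# Back-of-the-envelope Bayes with illustrative, assumed numbers:
# how much should a positive AI read move your belief?
prior = 0.05               # assumed probability of disease after the
                           # negative human-read scan

# Hypothetical AI test characteristics:
ai_sens, ai_spec = 0.95, 0.90    # P(pos | disease), P(neg | healthy)

# P(disease | AI positive) via Bayes' rule:
p_pos = ai_sens * prior + (1 - ai_spec) * (1 - prior)
posterior = ai_sens * prior / p_pos
print(posterior)           # ~0.33: far from certain, but ~7x the prior
```

Under those assumed numbers, the cheap positive result raises a 5% worry to roughly a one-in-three probability, which is hard to dismiss.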
Getting a free instant second opinion seems like a no-brainer.
It gets complicated when you think longer term, though: what if that becomes the only opinion, or the quality of the human opinion degrades because everyone knows AI is there as a backstop?
Is there much discussion on an AI “echo chamber” effect? I recall AI getting dumber on math problems as it interacted with the public and questioned itself. Couldn’t that be an issue here as well?
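One toy way to see how that kind of feedback loop can degrade a model: fit a distribution, sample from the fit, refit on the samples, and repeat. With no fresh real data, the fitted spread tends to collapse over generations. The numbers below are purely illustrative, not a claim about any particular system.

```python
# Toy "echo chamber": each generation is fit only on samples drawn
# from the previous generation's fit. With no new real data, the
# estimated spread tends to drift toward collapse.
import numpy as np

rng = np.random.default_rng(3)
mu, sigma = 0.0, 1.0
n = 50                                 # samples per "generation"

for gen in range(201):
    if gen % 25 == 0:
        print(f"gen {gen:3d}: mu={mu:+.3f}, sigma={sigma:.3f}")
    samples = rng.normal(mu, sigma, size=n)
    mu, sigma = samples.mean(), samples.std()   # refit on own output
```

It’s a loose analogy for models trained on their own output, but it shows why “questioning itself” without outside ground truth can make things worse.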
I kind of feel like it’s up to society, regulations, education, and the “invisible hand” to make sure people aren’t lollygagging slackers. I can’t comment on how well that works in practice, as I am currently working.