Artificial Intelligence Discussion

Do self-driving cars learn in a way that lets them know an area well? Obviously they can be programmed for unique situations that exist there.

The report from the AI should include its justification for a positive or negative result, which you can use as a second opinion or for reevaluation.

If it just says positive or negative, that is worse than useless.

But an edge case to an AI can often be an "easy case" for a human being. That example by Andrew Ng is a perfect illustration. I think this is essentially the problem with AI vs. humans. It's plausible that AI is slightly better than human beings on most of the distribution of inputs but fails so goddamn miserably in the tails.

Yes, like I said, I think your concerns are valid, but they also have well-defined boundaries.

In the case of cars, for example, the fatal edge cases must occur at some rate, which is either more or less than the rate at which humans kill each other. That's not outside the scope of statistics.

We could also return to my hypothetical. Your wife is afraid she has cancer, so she gets a screening. The doctor says she has no cancer. You don't feel like you have the money for a second opinion. However, your wife uploads her image to an AI, and it says she has cancer. Do you do any kind of follow-up, or just ignore it?

I don't know that the boundaries are that well defined.

It probably is closer to well defined in your hypothetical case of the cancer diagnosis. The population that is the basis of inference can probably be reasonably well defined, and assumed not to change too fast.

I don't see how this is the case for driving. In different language: how do we define the difference between interpolation and extrapolation in that case?

A causal model would help. If we are designing a combustion or electric engine, for example, we have a pretty good idea of the physics that constrain it, hopefully.

We don't have that for a "learning engine".

Yeah, we ran into that in the refining applications we looked at back in the day. The boundaries are the important part in a lot of applications. You don't want your refinery units to blow up, obviously. But all your training and test data almost certainly involve it NOT blowing up, and the model is a horrible predictor anywhere near anything dangerous.

Sorry, I guess I'm being vague. By boundaries I mean we can formulate the problem in practical and statistical terms. With a medical screening it's really just taking a sample and estimating false positives and negatives. In that case it doesn't matter if an outlier is causing the error, so long as the application is controlled.
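As a rough sketch of what I mean by formulating it statistically (the counts below are made up, purely for illustration, not any real screening data): with a labelled validation sample you can estimate the false-negative and false-positive rates directly, without ever explaining which outliers caused the errors.

```python
# Purely illustrative sketch: estimate screening error rates from a labelled
# validation sample. All counts here are invented for the example.
def screening_error_rates(tp: int, fp: int, tn: int, fn: int):
    """Return (false-negative rate, false-positive rate)."""
    fnr = fn / (fn + tp)   # of the true cancers, fraction the screen missed
    fpr = fp / (fp + tn)   # of the healthy cases, fraction flagged as cancer
    return fnr, fpr

# Hypothetical sample of 1,960 screenings.
print(screening_error_rates(tp=180, fp=60, tn=1700, fn=20))  # -> (0.1, ~0.034)
```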

With driving you might not be able to predict the edge cases until after they happen, but you can still sum up the accidents, injuries, and deaths per billion miles for a given location or condition, and make safety decisions based on those facts.
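Here is a minimal sketch of that accounting for driving, with my own invented figures and assuming fatal crashes behave roughly like a Poisson process: a rate per billion miles plus an exact confidence interval, so a small fleet's record can be set against a human baseline.

```python
# Minimal sketch with invented figures: fatality rate per billion miles with an
# exact Poisson confidence interval.
from scipy.stats import chi2

def deaths_per_billion_miles(deaths: int, miles: float, conf: float = 0.95):
    """Point estimate and exact Poisson CI, all scaled to per-billion-miles."""
    alpha = 1.0 - conf
    lower = chi2.ppf(alpha / 2, 2 * deaths) / 2 if deaths > 0 else 0.0
    upper = chi2.ppf(1 - alpha / 2, 2 * (deaths + 1)) / 2
    scale = 1e9 / miles
    return deaths * scale, lower * scale, upper * scale

# Hypothetical human baseline: ~12,000 deaths over 1.2 trillion miles
# (about 1 per 100M miles).
print(deaths_per_billion_miles(deaths=12_000, miles=1.2e12))
# A hypothetical fleet with zero deaths over 40M miles: the upper bound is
# still enormous, which is why a short record settles very little.
print(deaths_per_billion_miles(deaths=0, miles=4e7))
```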

With a medical screening it's really just taking a sample and estimating false positives and negatives.

No, this is wrong.

For your question, I already answered you above. Getting a second opinion on an existing scan is not expensive. It's taking the scan that's expensive. Checking that a scan is normal takes like 20 seconds for an experienced radiologist; it's nothing.

Ah, I didn't follow that as a direct reply. Okay.

Both of those rates will be estimated using historical data.

You have to infer from that what the rates will be for future data.

That is really hard to do if you don't know what your model assumes stays constant, as is the case with these deep neural nets.

I guess we'll see how each handles future-proofing?

With radiology I imagine the product is only approved to work in a standardized way that matches the experimental methods. I also expect that if they fail, we will be able to measure that, and sue people, and find products that do work instead. And in a lot of cases we will have enough data to work things out gradually. But who knows, I guess?

With cars, they are still constantly encountering novel issues, never mind the future. And there just aren't enough cars at this point to matter. At some point, let's say 6 years from now, they are in every major city and on highways, driving a few billion miles a month. Could something about the world or in a city change, causing a fatal accident? Something totally innocuous like a road pattern or a billboard or a lamp post? Yes, for sure. But if it's just a couple of deaths, followed immediately by shutting down the regional traffic, then who cares?

Maybe it's just lack of imagination on my part, but I have a hard time picturing a worst-case scenario that moves the needle on car safety, besides maybe a cybersecurity incident?

"I have spent my whole professional life developing robots and my companies have built more of them than anyone else," he writes in his scorecard, "but I can assure you that as a driver in San Francisco during the day I was getting pretty frustrated with driverless Cruise and Waymo vehicles doing stupid things that I saw and experienced every day."

Not to be glib, but the loved ones of the people who died will care.

And to what gain? A slight gain in convenience? It would only be life-changing for those poised to make a lot of money off it.

I do think it is technology worth pursuing. But it needs to be engineered to be safe, just like any other new technology.

I guess I wasn't clear there: I think the gain is safety. If we get a couple of deaths due to some weird bug in a few years, those will be fewer than the lives saved in those same years.

Maybe that's a bit of a trolley pull, but I will feel safer with my child on a bike knowing that there is a 99.99% reliable robot on the road rather than some 99%-reliable teenage monkeys.

I did misunderstand.

The problem in my mind is that the difficulty in identifying edge cases also makes it difficult to estimate whether safety has really improved.

This goes back to extrapolating future safety rates from historical data.

For Waymo, I'm most nervous about the period around now. Humans kill each other every 100M miles or so, and Waymo is pulling around a million miles a week. It is apparently driving very carefully, and in terms of accident and injury statistics it is kicking butt. But it has only gone something like 40M miles. It needs to cover several times that distance before we even get an expected death count of 1. So it is proving itself right now: this year and next year.
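To put rough numbers on that (back-of-the-envelope only, using the approximate figures above, not any official data):

```python
# Back-of-the-envelope for the mileage argument above. All figures approximate.
import math

human_rate_per_mile = 1 / 100e6    # ~1 fatal crash per 100M human-driven miles
waymo_miles = 40e6                 # roughly the mileage mentioned above

expected_deaths = human_rate_per_mile * waymo_miles    # ~0.4
p_zero_by_luck = math.exp(-expected_deaths)            # Poisson P(0 deaths) ~ 0.67

print(expected_deaths, p_zero_by_luck)
```

In other words, even a merely human-level system would show zero deaths about two-thirds of the time over that mileage, which is why the record so far proves less than it looks.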

Supposing Waymo makes it for a few years and starts driving a billion miles a week, the number of lives saved adds up to a lot very quickly, in my mind anyway. It will be a lot safer, and a fatal bug won't change that.

Conversely, I would say Uber, and arguably Tesla and Cruise, jumped the gun, and it is too late for those injured or dead.

As of now, Brooks notes, "there are no self driving cars deployed (despite what companies have tried to project to make it seem it has happened)."

I find this to be the much more interesting point. As of now, Waymo still has a lot more employees than cars. They need to prove that this can change before they can be called self-driving in a practical sense.

I agree. But that is a big "if". Also, my guess is that the kind of driving done by Waymo, in-city driving, has a much lower fatality rate.

They can probably try to extrapolate from "near misses". But we have to trust them to do that. I don't particularly. But I trust them a lot more than Tesla.

This is part 2 of the AI promise: all this technology has to work well enough to justify the enormous economic investment. From what I've read, this means it has to replace humans, not just augment them, at least in the case of LLMs with their enormous costs for training and even deployment.