Self-Driving Vehicles

ML needs data. The irony is that the more self-driving cars we have on the road, and the more accidents we have, the safer the cars become. We’re not really going to get anywhere unless something drastic is done, like banning manual cars completely.

I am a doubter too.

I like my adaptive cruise controls, BUT…

I also find the level 2 safety systems onerous at times, even potentially dangerous. On crowded expressways, I go to change lanes. I was taught as a teenager to accelerate into lane changes, to keep ahead of anyone flanking me in my blind spot. I have had the collision controls apply the brakes on me in that situation, slowing me down INTO someone on my flank. That is a crappy situation to put me in when going 70 in bumper-to-bumper traffic.

Probably the only way to get AVs to work is to remove all pedestrians from the road. Given the incredible complexity and high risk of harm of mixing pedestrians with traffic, the level of AI needed would be close to that of an actual human.

The things you remember from the old AO are interesting.
I remember sunshine and roses. Probably because I was never as wrong as you were, but I digress, because I agree with you on this.
I would rather not have any electronics taking over my controls (or, worse, fighting me for control). To me (and I guess to others), it relieves me of responsibility, making me lazy. And I don’t think I want a nation full of lazy drivers. Take that commercial with the Hyundai/Kia (yes, I cannot tell them apart, 'cause racist toward cars), where some driver is looking at her phone instead of driving, the car drifts out of the lane, and the car fixes it: “Cool! I can look at my phone MORE now.” They ought to end that commercial with the driver getting a ticket for illegal behavior instead of encouraging illegal behavior.
Grumpy Old Man Rant over.

It’s sort of like the argument that people will take more risk as you increase safety features because they’re comfortable with some level of risk. So young male drivers might be better off driving around with a big spike sticking out of the steering wheel so they don’t hit anyone going 100 mph.

Re-reading the article, this quote seems to encapsulate the problem with the wildly optimistic predictions.

No, you most assuredly do NOT have to peel back every layer of challenges to see the next layer. If you are working on how to keep the car in a clearly marked lane in optimal conditions, you should be thinking ahead:

“What if the lane markings are not clear?”
“What if the lane markings are covered in snow?”
“What if it’s raining or snowing or sleeting?”
“What if something white is on the road that’s not a lane marking?”

The problems are undoubtedly extremely difficult to solve; I’m not trying to say otherwise. And sure, you probably have to methodically solve the problems one step at a time or “layer by layer”. But you don’t need to have a crystal ball to see what the problems are going to be.

Many of the problems are extremely easy to predict. I mean, there may well ALSO exist problems that are difficult to predict. But there certainly exist, and have existed for a long time, a staggeringly large number of easy-to-predict-but-unsolved problems too.

When cars are running into white trucks and into people on bicycles loaded with bags, plowing through stop signs partially obstructed by trees, and having trouble merging into the Lincoln Tunnel… those are extremely easy-to-predict problems.

Now I’m hoping that this quote was somehow taken wildly out of context, but I’m honestly skeptical. For the better part of a decade it has seemed and still seems like the folks working on this stuff have a dangerous lack of foresight.

I mean do they need to hire me to come in and rain on their parade and say things like “um, guys, it’s great that this works in the desert but it has to work in the mountains too” and “road construction is an actual thing that happens” and “sometimes lightbulbs on traffic signals burn out” and “can your vehicle follow hand signals given by a DOT worker or a school crossing guard”? Because they give all outward appearances of being completely unaware of this stuff. Especially when they say stuff like the quoted part above or blame the human backup drivers rather than the technology when their cars screw up.

I guess it’s because I’ve spent so much time testing code over the course of my career that my mind always goes to “what will go wrong” and “what set of circumstances might cause this to not work as we want it to”.
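To make that concrete, here’s roughly the shape that thinking takes when I write it down: a toy sketch, not anything from a real AV codebase, and every name in it is made up (detect_lane is just a stub so the example runs).

```python
# Toy sketch only: enumerate the easy-to-predict edge cases up front and make
# each one an explicit test. detect_lane() is a stand-in stub, not real code.
from dataclasses import dataclass

import pytest


@dataclass
class LaneResult:
    confident: bool          # did we find the lane with high confidence?
    requests_handover: bool  # if not, did the system at least say so?


def detect_lane(scenario: str) -> LaneResult:
    # Stub: pretend anything other than clear, dry markings is low-confidence.
    confident = scenario == "clear_dry_markings"
    return LaneResult(confident=confident, requests_handover=not confident)


EDGE_CASES = [
    "faded_markings",
    "snow_covered_markings",
    "heavy_rain",
    "white_debris_on_road",   # something white that isn't a lane marking
    "construction_zone",
]


@pytest.mark.parametrize("scenario", EDGE_CASES)
def test_degrades_safely(scenario):
    result = detect_lane(scenario)
    # The bar isn't "always finds the lane"; it's "knows when it can't and says so".
    assert result.confident or result.requests_handover
```

The point isn’t the stub; it’s that the list of scenarios gets written down and tested before anyone gets to act surprised by snow.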

The AV folks act surprised by this stuff. Like they haven’t put any thought into what can go wrong.

And like… if my code to categorize all of our policies for the reserving software screws something up, then our reserves will be off by a few million dollars one way or the other, and my employer’s or client’s profits will be too high one year and too low another. That’s unacceptable to the people who pay me and the people they answer to… but not really on par with killing people.

So a healthy dose of pessimism seems vastly more important with AV than with the stuff I deal with on a daily basis.

(Edited to correct “EV” to “AV”)


You don’t work with many data scientists, do you? What you describe is exactly the way data scientists approach modeling. They seem to hold an unshakable belief that the data will lead them to the truth. They are genuinely puzzled when some sort of event is not in the data, when that type of event was ignored as an outlier, or when their model makes a choice that is right the majority of the time but not in every situation.

I actually have worked in a non-traditional role in a big data environment.

But no one died if there were imperfections in our models.

I suppose in hindsight that they really weren’t that concerned about imperfections that would have cost more to fix than the fix was worth.

But again, no one died if we made mistakes or our models failed to account for edge cases. A client might pay or charge too much or too little. 98% was plenty good enough in that environment.

One of my roles was to delve into some of the more obviously ridiculous outputs that affected very little but really looked crazy wrong. I got to solve some interesting problems in that job.

Again, interesting problems that were not life-threatening.

I have yet to work with data scientists who have any concern for edge-case scenarios. I worked on a project years ago to automate underwriting on one of our products. I found a flaw in the model that would allow a policy to be issued that was not an acceptable risk. I pointed out the flaw and suggested an edit that would prevent it. The problem with the edit was that it reduced the automation rate by “too much”. The people who made the model convinced management to implement it with the flaw because the accuracy rate of the model was so high. Six months later I’m getting a call to investigate an auto-issued policy that had just racked up $750K in claims. We couldn’t rescind the policy because the applicant had disclosed all of their health issues on the application. It might not be life and death, but it certainly was costly, and it was a case that an underwriter would have rejected after about 5 seconds of review. I am unsurprised that people automating vehicles don’t concern themselves with “outliers” either.


Yeah, that sounds very frustrating. A real “I told you so” moment.

Without going into too much detail about what I did, I think the kinds of errors we were letting slide did not have that kind of money attached to them. Like, our estimates of the whole market were off by a little bit, so our estimates of our market share were off by a little bit.

But if we were suddenly interested in delving into a new area, for which our calculations were not very good… then management did recognize “we gotta get that cleaned up before we go too much further”.

Then I’d look into it, figure out what the issue was, add a few more “case… then” statements to our code to fix the more glaringly wrong stuff, and that would be that.
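In spirit it was nothing fancier than this (a toy illustration; the real thing was “case… then” logic in our own codebase, and these segment names and adjustments are invented for the example):

```python
# Toy illustration of patching the glaringly wrong outputs;
# the segment name and adjustment factor here are made up.
def adjusted_estimate(segment: str, raw_estimate: float) -> float:
    if raw_estimate < 0:
        return 0.0                   # a negative market estimate is obviously wrong
    elif segment == "new_line_we_barely_write":
        return raw_estimate * 0.5    # model wildly overshoots in this segment
    else:
        return raw_estimate          # leave everything else alone
```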


Ugh, I’ve worked with these people. My last company did stuff like this: they fretted over automating things and getting a slick dashboard, and actually getting the analysis right was a very distant third priority. Totally backwards.

I think part of the problem is that optimism is very important to innovation.

Skepticism is very important to scientists, and also to safety.

These two virtues come to be in tension with each other.

I’d argue that, regardless of their title, those data scientists were acting as innovators, not scientists.

I do think that “data science” has this odd tension in it: in some cases it is more science, and in other cases more engineering. Perhaps we see this in its two “parents”: statistics is concerned with knowledge (at least a lot of the time), while machine learning is more concerned with pattern recognition, which I consider more engineering.


Weird Al could write a good parody of that. Make it that stupid-looking Tesla truck, falling for an obviously bigger (YKWIM) semi. The lyrics write themselves!

Could be a sequel to this:


So I was driving to the chiropractor today, going 70 mph on the interstate, when my 2020 vehicle made a beeping noise to let me know that none of the automation features were working. I couldn’t set the cruise control or use the self-steering thing. The radio/navigation/infotainment system stopped working. No apparent reason for this. It came back on in about 3-4 minutes, and I could still manually operate the car.

But stuff like that just can’t happen in an AV. I was on a turn when it happened… there would’ve been no way for an AV to safely come to a stop while it reset itself. An AV would’ve crashed right then & there.

Me living without level-2 features and the infotainment system for 3-4 minutes is no biggie. But for a blind or drunk or napping person it would’ve been catastrophic.


I guess I’m stuck with my old car forever, as I do not want any help driving. Some people welcome it, and it might be for the greater good for them to get some assisted driving. But I’d rather do all the driving myself.

I remember at a CAS conference a discussion about the different levels of autonomous cars, and it seems apparent that it’s great up to a point, say level 2 like in your situation, but you’re almost worse off at the near-autonomous-but-not-quite levels. If the car works 99% of the time but needs the guy who has been watching Seinfeld in the back seat to suddenly take over when the weather is very bad, that doesn’t sound like an improvement. Beyond automatic braking and those kinds of safety nets, I think it’s an all-or-nothing deal with an autonomous car, or at least all-or-nothing on certain roads (for example, if it could do the highway 100%).


Yeah, I think level 2 is about the limit of what’s safe. I mean, maybe I need a refresher on what levels 3 & 4 are.

I suppose something that works 100.00000% of the time on the interstate but needs you to take over when you exit would be ok. You couldn’t be drunk or blind or an unlicensed tween, but you could text or put on your makeup or check GoActuary.

Now this car has around 15,600 miles on it at this point, and this is the first time it’s done this for no apparent reason. The infotainment system has shut down for no apparent reason before, but that’s not mission-critical. The level 2 automation has shut down in the rain before, but that wasn’t for no apparent reason: it couldn’t see well enough to drive, which is a reason. So, not counting the rain (which they’ll obviously have to figure out), there’s only been a tech failure 0.006% of the time.
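For what it’s worth, the 0.006% is just that one unexplained glitch spread over the car’s 15,600 miles (a rough back-of-envelope reading on my part):

```python
# Back-of-envelope: one unexplained failure event over ~15,600 miles driven.
miles_driven = 15_600
unexplained_failures = 1  # today's 3-4 minute dropout
print(f"{unexplained_failures / miles_driven:.3%}")  # prints 0.006%
```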

But that’s like, way worse than a Ford Pinto.